2025-08-29 14:06:02.196102 | Job console starting
2025-08-29 14:06:02.217125 | Updating git repos
2025-08-29 14:06:02.293036 | Cloning repos into workspace
2025-08-29 14:06:02.476080 | Restoring repo states
2025-08-29 14:06:02.498641 | Merging changes
2025-08-29 14:06:02.498659 | Checking out repos
2025-08-29 14:06:02.757355 | Preparing playbooks
2025-08-29 14:06:03.383172 | Running Ansible setup
2025-08-29 14:06:08.044853 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-08-29 14:06:08.849364 |
2025-08-29 14:06:08.849548 | PLAY [Base pre]
2025-08-29 14:06:08.873634 |
2025-08-29 14:06:08.873790 | TASK [Setup log path fact]
2025-08-29 14:06:08.904448 | orchestrator | ok
2025-08-29 14:06:08.921900 |
2025-08-29 14:06:08.922045 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-08-29 14:06:08.951823 | orchestrator | ok
2025-08-29 14:06:08.964673 |
2025-08-29 14:06:08.964801 | TASK [emit-job-header : Print job information]
2025-08-29 14:06:09.005478 | # Job Information
2025-08-29 14:06:09.005670 | Ansible Version: 2.16.14
2025-08-29 14:06:09.005707 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-08-29 14:06:09.005742 | Pipeline: post
2025-08-29 14:06:09.005766 | Executor: 521e9411259a
2025-08-29 14:06:09.005787 | Triggered by: https://github.com/osism/testbed/commit/cd40b8d9aeabc9c007d5e73667eb0ed02c89b73a
2025-08-29 14:06:09.005809 | Event ID: 4784273c-84e1-11f0-9d45-642685911fce
2025-08-29 14:06:09.012847 |
2025-08-29 14:06:09.012969 | LOOP [emit-job-header : Print node information]
2025-08-29 14:06:09.162826 | orchestrator | ok:
2025-08-29 14:06:09.163118 | orchestrator | # Node Information
2025-08-29 14:06:09.163165 | orchestrator | Inventory Hostname: orchestrator
2025-08-29 14:06:09.163198 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-08-29 14:06:09.163226 | orchestrator | Username: zuul-testbed03
2025-08-29 14:06:09.163253 | orchestrator | Distro: Debian 12.11
2025-08-29 14:06:09.163283 | orchestrator | Provider: static-testbed
2025-08-29 14:06:09.163311 | orchestrator | Region:
2025-08-29 14:06:09.163337 | orchestrator | Label: testbed-orchestrator
2025-08-29 14:06:09.163364 | orchestrator | Product Name: OpenStack Nova
2025-08-29 14:06:09.163390 | orchestrator | Interface IP: 81.163.193.140
2025-08-29 14:06:09.175816 |
2025-08-29 14:06:09.175973 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-08-29 14:06:09.788663 | orchestrator -> localhost | changed
2025-08-29 14:06:09.797366 |
2025-08-29 14:06:09.797547 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-08-29 14:06:11.127930 | orchestrator -> localhost | changed
2025-08-29 14:06:11.144262 |
2025-08-29 14:06:11.144440 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-08-29 14:06:11.471760 | orchestrator -> localhost | ok
2025-08-29 14:06:11.486167 |
2025-08-29 14:06:11.486341 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-08-29 14:06:11.518768 | orchestrator | ok
2025-08-29 14:06:11.536750 | orchestrator | included: /var/lib/zuul/builds/3a100108136040079abb46831c0215f4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-08-29 14:06:11.546741 |
2025-08-29 14:06:11.546878 | TASK [add-build-sshkey : Create Temp SSH key]
2025-08-29 14:06:12.780140 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-08-29 14:06:12.780377 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/3a100108136040079abb46831c0215f4/work/3a100108136040079abb46831c0215f4_id_rsa
2025-08-29 14:06:12.780437 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/3a100108136040079abb46831c0215f4/work/3a100108136040079abb46831c0215f4_id_rsa.pub
2025-08-29 14:06:12.780466 | orchestrator -> localhost | The key fingerprint is:
2025-08-29 14:06:12.780495 | orchestrator -> localhost | SHA256:wSwAToRfT2tmaBfheeLE9t/phn33++zsQYAL7NKDMb8 zuul-build-sshkey
2025-08-29 14:06:12.780519 | orchestrator -> localhost | The key's randomart image is:
2025-08-29 14:06:12.780554 | orchestrator -> localhost | +---[RSA 3072]----+
2025-08-29 14:06:12.780577 | orchestrator -> localhost | | o+.. ..         |
2025-08-29 14:06:12.780599 | orchestrator -> localhost | |.o .+o+. .       |
2025-08-29 14:06:12.780620 | orchestrator -> localhost | | ... +O==o . .   |
2025-08-29 14:06:12.780641 | orchestrator -> localhost | | . o+B=B.. . .   |
2025-08-29 14:06:12.780661 | orchestrator -> localhost | | . =.oS= . .     |
2025-08-29 14:06:12.780688 | orchestrator -> localhost | | ..o. ..         |
2025-08-29 14:06:12.780709 | orchestrator -> localhost | | E.oo .          |
2025-08-29 14:06:12.780728 | orchestrator -> localhost | | ..o .oo|
2025-08-29 14:06:12.780749 | orchestrator -> localhost | | ... =@|
2025-08-29 14:06:12.780769 | orchestrator -> localhost | +----[SHA256]-----+
2025-08-29 14:06:12.780828 | orchestrator -> localhost | ok: Runtime: 0:00:00.711685
2025-08-29 14:06:12.788672 |
2025-08-29 14:06:12.788792 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-08-29 14:06:12.818163 | orchestrator | ok
2025-08-29 14:06:12.828800 | orchestrator | included: /var/lib/zuul/builds/3a100108136040079abb46831c0215f4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-08-29 14:06:12.838284 |
2025-08-29 14:06:12.838397 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-08-29 14:06:12.862460 | orchestrator | skipping: Conditional result was False
2025-08-29 14:06:12.870583 |
2025-08-29 14:06:12.870704 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-08-29 14:06:13.885844 | orchestrator | changed
2025-08-29 14:06:13.900703 |
2025-08-29 14:06:13.900845 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-08-29 14:06:14.171055 | orchestrator | ok
2025-08-29 14:06:14.195942 |
2025-08-29 14:06:14.196088 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-08-29 14:06:14.723107 | orchestrator | ok
2025-08-29 14:06:14.732800 |
2025-08-29 14:06:14.732941 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-08-29 14:06:15.092099 | orchestrator | ok
2025-08-29 14:06:15.098890 |
2025-08-29 14:06:15.098999 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-08-29 14:06:15.125132 | orchestrator | skipping: Conditional result was False
2025-08-29 14:06:15.141209 |
2025-08-29 14:06:15.141352 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-08-29 14:06:15.886386 | orchestrator -> localhost | changed
2025-08-29 14:06:15.910122 |
2025-08-29 14:06:15.910276 | TASK [add-build-sshkey : Add back temp key]
2025-08-29 14:06:16.298001 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/3a100108136040079abb46831c0215f4/work/3a100108136040079abb46831c0215f4_id_rsa (zuul-build-sshkey)
2025-08-29 14:06:16.298263 | orchestrator -> localhost | ok: Runtime: 0:00:00.013334
2025-08-29 14:06:16.305844 |
2025-08-29 14:06:16.305978 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-08-29 14:06:16.869810 | orchestrator | ok
2025-08-29 14:06:16.875913 |
2025-08-29 14:06:16.876031 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-08-29 14:06:16.928683 | orchestrator | skipping: Conditional result was False
2025-08-29 14:06:17.071949 |
2025-08-29 14:06:17.072149 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-08-29 14:06:17.444134 | orchestrator | ok
2025-08-29 14:06:17.462518 |
2025-08-29 14:06:17.462684 | TASK [validate-host : Define zuul_info_dir fact]
2025-08-29 14:06:17.516815 | orchestrator | ok
2025-08-29 14:06:17.529003 |
2025-08-29 14:06:17.529157 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-08-29 14:06:18.066880 | orchestrator -> localhost | ok
2025-08-29 14:06:18.083869 |
2025-08-29 14:06:18.084021 | TASK [validate-host : Collect information about the host]
2025-08-29 14:06:19.348257 | orchestrator | ok
2025-08-29 14:06:19.367732 |
2025-08-29 14:06:19.367843 | TASK [validate-host : Sanitize hostname]
2025-08-29 14:06:19.415515 | orchestrator | ok
2025-08-29 14:06:19.420601 |
2025-08-29 14:06:19.420696 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-08-29 14:06:19.994263 | orchestrator -> localhost | changed
2025-08-29 14:06:20.000199 |
2025-08-29 14:06:20.000293 | TASK [validate-host : Collect information about zuul worker]
2025-08-29 14:06:20.386280 | orchestrator | ok
2025-08-29 14:06:20.395483 |
2025-08-29 14:06:20.395590 | TASK [validate-host : Write out all zuul information for each host]
2025-08-29 14:06:21.290074 | orchestrator -> localhost | changed
2025-08-29 14:06:21.317752 |
2025-08-29 14:06:21.317873 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-08-29 14:06:21.623146 | orchestrator | ok
2025-08-29 14:06:21.628797 |
2025-08-29 14:06:21.628889 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-08-29 14:07:07.655177 | orchestrator | changed:
2025-08-29 14:07:07.655412 | orchestrator | .d..t...... src/
2025-08-29 14:07:07.655473 | orchestrator | .d..t...... src/github.com/
2025-08-29 14:07:07.655500 | orchestrator | .d..t...... src/github.com/osism/
2025-08-29 14:07:07.655522 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-08-29 14:07:07.655543 | orchestrator | RedHat.yml
2025-08-29 14:07:07.677761 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-08-29 14:07:07.677779 | orchestrator | RedHat.yml
2025-08-29 14:07:07.677832 | orchestrator | = 1.53.0"...
2025-08-29 14:07:19.969029 | orchestrator | 14:07:19.968 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-08-29 14:07:20.124420 | orchestrator | 14:07:20.124 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-08-29 14:07:20.600685 | orchestrator | 14:07:20.600 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 14:07:20.676312 | orchestrator | 14:07:20.676 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-08-29 14:07:21.434376 | orchestrator | 14:07:21.434 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-08-29 14:07:21.505485 | orchestrator | 14:07:21.505 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-08-29 14:07:21.980886 | orchestrator | 14:07:21.980 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-08-29 14:07:21.980955 | orchestrator | 14:07:21.980 STDOUT terraform: Providers are signed by their developers.
2025-08-29 14:07:21.980965 | orchestrator | 14:07:21.980 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-08-29 14:07:21.980972 | orchestrator | 14:07:21.980 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-08-29 14:07:21.981024 | orchestrator | 14:07:21.980 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-08-29 14:07:21.981153 | orchestrator | 14:07:21.980 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-08-29 14:07:21.981165 | orchestrator | 14:07:21.981 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-08-29 14:07:21.981171 | orchestrator | 14:07:21.981 STDOUT terraform: you run "tofu init" in the future.
2025-08-29 14:07:21.981222 | orchestrator | 14:07:21.981 STDOUT terraform: OpenTofu has been successfully initialized!
2025-08-29 14:07:21.981361 | orchestrator | 14:07:21.981 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-08-29 14:07:21.981449 | orchestrator | 14:07:21.981 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-08-29 14:07:21.981458 | orchestrator | 14:07:21.981 STDOUT terraform: should now work.
2025-08-29 14:07:21.981463 | orchestrator | 14:07:21.981 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-08-29 14:07:21.981467 | orchestrator | 14:07:21.981 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-08-29 14:07:21.981493 | orchestrator | 14:07:21.981 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-08-29 14:07:22.099836 | orchestrator | 14:07:22.098 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-08-29 14:07:22.099884 | orchestrator | 14:07:22.098 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-08-29 14:07:22.612248 | orchestrator | 14:07:22.612 STDOUT terraform: Created and switched to workspace "ci"!
2025-08-29 14:07:22.612343 | orchestrator | 14:07:22.612 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-08-29 14:07:22.612360 | orchestrator | 14:07:22.612 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-08-29 14:07:22.612369 | orchestrator | 14:07:22.612 STDOUT terraform: for this configuration.
2025-08-29 14:07:22.768849 | orchestrator | 14:07:22.768 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-08-29 14:07:22.769014 | orchestrator | 14:07:22.768 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-08-29 14:07:22.869399 | orchestrator | 14:07:22.869 STDOUT terraform: ci.auto.tfvars
2025-08-29 14:07:23.259357 | orchestrator | 14:07:23.259 STDOUT terraform: default_custom.tf
2025-08-29 14:07:24.344780 | orchestrator | 14:07:24.344 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-08-29 14:07:25.326093 | orchestrator | 14:07:25.324 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-08-29 14:07:25.836507 | orchestrator | 14:07:25.836 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-08-29 14:07:26.214189 | orchestrator | 14:07:26.213 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-08-29 14:07:26.214259 | orchestrator | 14:07:26.214 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-08-29 14:07:26.214266 | orchestrator | 14:07:26.214 STDOUT terraform:   + create
2025-08-29 14:07:26.214272 | orchestrator | 14:07:26.214 STDOUT terraform:  <= read (data resources)
2025-08-29 14:07:26.214278 | orchestrator | 14:07:26.214 STDOUT terraform: OpenTofu will perform the following actions:
2025-08-29 14:07:26.214590 | orchestrator | 14:07:26.214 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-08-29 14:07:26.214605 | orchestrator | 14:07:26.214 STDOUT terraform:   # (config refers to values not yet known)
2025-08-29 14:07:26.214640 | orchestrator | 14:07:26.214 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-08-29 14:07:26.214668 | orchestrator | 14:07:26.214 STDOUT terraform:       + checksum = (known after apply)
2025-08-29 14:07:26.214698 | orchestrator | 14:07:26.214 STDOUT terraform:       + created_at = (known after apply)
2025-08-29 14:07:26.214765 | orchestrator | 14:07:26.214 STDOUT terraform:       + file = (known after apply)
2025-08-29 14:07:26.214771 | orchestrator | 14:07:26.214 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.214777 | orchestrator | 14:07:26.214 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.214807 | orchestrator | 14:07:26.214 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-08-29 14:07:26.214838 | orchestrator | 14:07:26.214 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-08-29 14:07:26.214859 | orchestrator | 14:07:26.214 STDOUT terraform:       + most_recent = true
2025-08-29 14:07:26.214887 | orchestrator | 14:07:26.214 STDOUT terraform:       + name = (known after apply)
2025-08-29 14:07:26.214914 | orchestrator | 14:07:26.214 STDOUT terraform:       + protected = (known after apply)
2025-08-29 14:07:26.214959 | orchestrator | 14:07:26.214 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.214987 | orchestrator | 14:07:26.214 STDOUT terraform:       + schema = (known after apply)
2025-08-29 14:07:26.215015 | orchestrator | 14:07:26.214 STDOUT terraform:       + size_bytes = (known after apply)
2025-08-29 14:07:26.215044 | orchestrator | 14:07:26.215 STDOUT terraform:       + tags = (known after apply)
2025-08-29 14:07:26.215072 | orchestrator | 14:07:26.215 STDOUT terraform:       + updated_at = (known after apply)
2025-08-29 14:07:26.216042 | orchestrator | 14:07:26.215 STDOUT terraform:     }
2025-08-29 14:07:26.216108 | orchestrator | 14:07:26.215 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-08-29 14:07:26.216118 | orchestrator | 14:07:26.215 STDOUT terraform:   # (config refers to values not yet known)
2025-08-29 14:07:26.216126 | orchestrator | 14:07:26.215 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-08-29 14:07:26.216134 | orchestrator | 14:07:26.215 STDOUT terraform:       + checksum = (known after apply)
2025-08-29 14:07:26.216140 | orchestrator | 14:07:26.215 STDOUT terraform:       + created_at = (known after apply)
2025-08-29 14:07:26.216147 | orchestrator | 14:07:26.215 STDOUT terraform:       + file = (known after apply)
2025-08-29 14:07:26.216154 | orchestrator | 14:07:26.215 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.216160 | orchestrator | 14:07:26.215 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.216167 | orchestrator | 14:07:26.215 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-08-29 14:07:26.216174 | orchestrator | 14:07:26.215 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-08-29 14:07:26.216189 | orchestrator | 14:07:26.215 STDOUT terraform:       + most_recent = true
2025-08-29 14:07:26.216196 | orchestrator | 14:07:26.215 STDOUT terraform:       + name = (known after apply)
2025-08-29 14:07:26.216203 | orchestrator | 14:07:26.215 STDOUT terraform:       + protected = (known after apply)
2025-08-29 14:07:26.216209 | orchestrator | 14:07:26.215 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.216216 | orchestrator | 14:07:26.215 STDOUT terraform:       + schema = (known after apply)
2025-08-29 14:07:26.216223 | orchestrator | 14:07:26.215 STDOUT terraform:       + size_bytes = (known after apply)
2025-08-29 14:07:26.216229 | orchestrator | 14:07:26.215 STDOUT terraform:       + tags = (known after apply)
2025-08-29 14:07:26.216236 | orchestrator | 14:07:26.215 STDOUT terraform:       + updated_at = (known after apply)
2025-08-29 14:07:26.216243 | orchestrator | 14:07:26.215 STDOUT terraform:     }
2025-08-29 14:07:26.216249 | orchestrator | 14:07:26.215 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-08-29 14:07:26.216269 | orchestrator | 14:07:26.215 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-08-29 14:07:26.216276 | orchestrator | 14:07:26.215 STDOUT terraform:       + content = (known after apply)
2025-08-29 14:07:26.216283 | orchestrator | 14:07:26.215 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-08-29 14:07:26.216290 | orchestrator | 14:07:26.215 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-08-29 14:07:26.216296 | orchestrator | 14:07:26.215 STDOUT terraform:       + content_md5 = (known after apply)
2025-08-29 14:07:26.216303 | orchestrator | 14:07:26.215 STDOUT terraform:       + content_sha1 = (known after apply)
2025-08-29 14:07:26.216310 | orchestrator | 14:07:26.215 STDOUT terraform:       + content_sha256 = (known after apply)
2025-08-29 14:07:26.216317 | orchestrator | 14:07:26.215 STDOUT terraform:       + content_sha512 = (known after apply)
2025-08-29 14:07:26.216324 | orchestrator | 14:07:26.215 STDOUT terraform:       + directory_permission = "0777"
2025-08-29 14:07:26.216330 | orchestrator | 14:07:26.215 STDOUT terraform:       + file_permission = "0644"
2025-08-29 14:07:26.216337 | orchestrator | 14:07:26.215 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-08-29 14:07:26.216344 | orchestrator | 14:07:26.215 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.216350 | orchestrator | 14:07:26.215 STDOUT terraform:     }
2025-08-29 14:07:26.216366 | orchestrator | 14:07:26.215 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-08-29 14:07:26.216373 | orchestrator | 14:07:26.216 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-08-29 14:07:26.216379 | orchestrator | 14:07:26.216 STDOUT terraform:       + content = (known after apply)
2025-08-29 14:07:26.216386 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-08-29 14:07:26.216393 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-08-29 14:07:26.216400 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_md5 = (known after apply)
2025-08-29 14:07:26.216406 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_sha1 = (known after apply)
2025-08-29 14:07:26.216413 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_sha256 = (known after apply)
2025-08-29 14:07:26.216419 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_sha512 = (known after apply)
2025-08-29 14:07:26.216426 | orchestrator | 14:07:26.216 STDOUT terraform:       + directory_permission = "0777"
2025-08-29 14:07:26.216433 | orchestrator | 14:07:26.216 STDOUT terraform:       + file_permission = "0644"
2025-08-29 14:07:26.216439 | orchestrator | 14:07:26.216 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-08-29 14:07:26.216446 | orchestrator | 14:07:26.216 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.216453 | orchestrator | 14:07:26.216 STDOUT terraform:     }
2025-08-29 14:07:26.216503 | orchestrator | 14:07:26.216 STDOUT terraform:   # local_file.inventory will be created
2025-08-29 14:07:26.216558 | orchestrator | 14:07:26.216 STDOUT terraform:   + resource "local_file" "inventory" {
2025-08-29 14:07:26.216567 | orchestrator | 14:07:26.216 STDOUT terraform:       + content = (known after apply)
2025-08-29 14:07:26.216601 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-08-29 14:07:26.216634 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-08-29 14:07:26.216670 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_md5 = (known after apply)
2025-08-29 14:07:26.216709 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_sha1 = (known after apply)
2025-08-29 14:07:26.216742 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_sha256 = (known after apply)
2025-08-29 14:07:26.216782 | orchestrator | 14:07:26.216 STDOUT terraform:       + content_sha512 = (known after apply)
2025-08-29 14:07:26.216813 | orchestrator | 14:07:26.216 STDOUT terraform:       + directory_permission = "0777"
2025-08-29 14:07:26.216838 | orchestrator | 14:07:26.216 STDOUT terraform:       + file_permission = "0644"
2025-08-29 14:07:26.216875 | orchestrator | 14:07:26.216 STDOUT terraform:       + filename = "inventory.ci"
2025-08-29 14:07:26.216918 | orchestrator | 14:07:26.216 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.216929 | orchestrator | 14:07:26.216 STDOUT terraform:     }
2025-08-29 14:07:26.217016 | orchestrator | 14:07:26.216 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-08-29 14:07:26.217026 | orchestrator | 14:07:26.216 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-08-29 14:07:26.217037 | orchestrator | 14:07:26.217 STDOUT terraform:       + content = (sensitive value)
2025-08-29 14:07:26.217074 | orchestrator | 14:07:26.217 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-08-29 14:07:26.217107 | orchestrator | 14:07:26.217 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-08-29 14:07:26.217145 | orchestrator | 14:07:26.217 STDOUT terraform:       + content_md5 = (known after apply)
2025-08-29 14:07:26.217181 | orchestrator | 14:07:26.217 STDOUT terraform:       + content_sha1 = (known after apply)
2025-08-29 14:07:26.217215 | orchestrator | 14:07:26.217 STDOUT terraform:       + content_sha256 = (known after apply)
2025-08-29 14:07:26.217247 | orchestrator | 14:07:26.217 STDOUT terraform:       + content_sha512 = (known after apply)
2025-08-29 14:07:26.217271 | orchestrator | 14:07:26.217 STDOUT terraform:       + directory_permission = "0700"
2025-08-29 14:07:26.217295 | orchestrator | 14:07:26.217 STDOUT terraform:       + file_permission = "0600"
2025-08-29 14:07:26.217328 | orchestrator | 14:07:26.217 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-08-29 14:07:26.217369 | orchestrator | 14:07:26.217 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.217379 | orchestrator | 14:07:26.217 STDOUT terraform:     }
2025-08-29 14:07:26.217408 | orchestrator | 14:07:26.217 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-08-29 14:07:26.217438 | orchestrator | 14:07:26.217 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-08-29 14:07:26.217462 | orchestrator | 14:07:26.217 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.217471 | orchestrator | 14:07:26.217 STDOUT terraform:     }
2025-08-29 14:07:26.217560 | orchestrator | 14:07:26.217 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-08-29 14:07:26.217584 | orchestrator | 14:07:26.217 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-08-29 14:07:26.217618 | orchestrator | 14:07:26.217 STDOUT terraform:       + attachment = (known after apply)
2025-08-29 14:07:26.217641 | orchestrator | 14:07:26.217 STDOUT terraform:       + availability_zone = "nova"
2025-08-29 14:07:26.217682 | orchestrator | 14:07:26.217 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.217718 | orchestrator | 14:07:26.217 STDOUT terraform:       + image_id = (known after apply)
2025-08-29 14:07:26.217820 | orchestrator | 14:07:26.217 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.217830 | orchestrator | 14:07:26.217 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-08-29 14:07:26.218112 | orchestrator | 14:07:26.217 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.218417 | orchestrator | 14:07:26.218 STDOUT terraform:       + size = 80
2025-08-29 14:07:26.218846 | orchestrator | 14:07:26.218 STDOUT terraform:       + volume_retype_policy = "never"
2025-08-29 14:07:26.219165 | orchestrator | 14:07:26.218 STDOUT terraform:       + volume_type = "ssd"
2025-08-29 14:07:26.219420 | orchestrator | 14:07:26.219 STDOUT terraform:     }
2025-08-29 14:07:26.220481 | orchestrator | 14:07:26.219 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-08-29 14:07:26.221240 | orchestrator | 14:07:26.220 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.221895 | orchestrator | 14:07:26.221 STDOUT terraform:       + attachment = (known after apply)
2025-08-29 14:07:26.222112 | orchestrator | 14:07:26.221 STDOUT terraform:       + availability_zone = "nova"
2025-08-29 14:07:26.222288 | orchestrator | 14:07:26.222 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.222866 | orchestrator | 14:07:26.222 STDOUT terraform:       + image_id = (known after apply)
2025-08-29 14:07:26.223261 | orchestrator | 14:07:26.222 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.223594 | orchestrator | 14:07:26.223 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-08-29 14:07:26.224411 | orchestrator | 14:07:26.223 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.224613 | orchestrator | 14:07:26.224 STDOUT terraform:       + size = 80
2025-08-29 14:07:26.224864 | orchestrator | 14:07:26.224 STDOUT terraform:       + volume_retype_policy = "never"
2025-08-29 14:07:26.225242 | orchestrator | 14:07:26.224 STDOUT terraform:       + volume_type = "ssd"
2025-08-29 14:07:26.225405 | orchestrator | 14:07:26.225 STDOUT terraform:     }
2025-08-29 14:07:26.225584 | orchestrator | 14:07:26.225 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-08-29 14:07:26.226045 | orchestrator | 14:07:26.225 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.226234 | orchestrator | 14:07:26.226 STDOUT terraform:       + attachment = (known after apply)
2025-08-29 14:07:26.226477 | orchestrator | 14:07:26.226 STDOUT terraform:       + availability_zone = "nova"
2025-08-29 14:07:26.228384 | orchestrator | 14:07:26.226 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.228561 | orchestrator | 14:07:26.228 STDOUT terraform:       + image_id = (known after apply)
2025-08-29 14:07:26.228579 | orchestrator | 14:07:26.228 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.228744 | orchestrator | 14:07:26.228 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-08-29 14:07:26.228893 | orchestrator | 14:07:26.228 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.229342 | orchestrator | 14:07:26.228 STDOUT terraform:       + size = 80
2025-08-29 14:07:26.229503 | orchestrator | 14:07:26.229 STDOUT terraform:       + volume_retype_policy = "never"
2025-08-29 14:07:26.229745 | orchestrator | 14:07:26.229 STDOUT terraform:       + volume_type = "ssd"
2025-08-29 14:07:26.229827 | orchestrator | 14:07:26.229 STDOUT terraform:     }
2025-08-29 14:07:26.230271 | orchestrator | 14:07:26.229 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-08-29 14:07:26.230360 | orchestrator | 14:07:26.230 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.230397 | orchestrator | 14:07:26.230 STDOUT terraform:       + attachment = (known after apply)
2025-08-29 14:07:26.230430 | orchestrator | 14:07:26.230 STDOUT terraform:       + availability_zone = "nova"
2025-08-29 14:07:26.230461 | orchestrator | 14:07:26.230 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.230496 | orchestrator | 14:07:26.230 STDOUT terraform:       + image_id = (known after apply)
2025-08-29 14:07:26.230577 | orchestrator | 14:07:26.230 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.230592 | orchestrator | 14:07:26.230 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-08-29 14:07:26.230618 | orchestrator | 14:07:26.230 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.230631 | orchestrator | 14:07:26.230 STDOUT terraform:       + size = 80
2025-08-29 14:07:26.230659 | orchestrator | 14:07:26.230 STDOUT terraform:       + volume_retype_policy = "never"
2025-08-29 14:07:26.230672 | orchestrator | 14:07:26.230 STDOUT terraform:       + volume_type = "ssd"
2025-08-29 14:07:26.230683 | orchestrator | 14:07:26.230 STDOUT terraform:     }
2025-08-29 14:07:26.230732 | orchestrator | 14:07:26.230 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-08-29 14:07:26.230775 | orchestrator | 14:07:26.230 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.230809 | orchestrator | 14:07:26.230 STDOUT terraform:       + attachment = (known after apply)
2025-08-29 14:07:26.230834 | orchestrator | 14:07:26.230 STDOUT terraform:       + availability_zone = "nova"
2025-08-29 14:07:26.230872 | orchestrator | 14:07:26.230 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.230918 | orchestrator | 14:07:26.230 STDOUT terraform:       + image_id = (known after apply)
2025-08-29 14:07:26.230953 | orchestrator | 14:07:26.230 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.230997 | orchestrator | 14:07:26.230 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-08-29 14:07:26.231032 | orchestrator | 14:07:26.230 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.231045 | orchestrator | 14:07:26.231 STDOUT terraform:       + size = 80
2025-08-29 14:07:26.231074 | orchestrator | 14:07:26.231 STDOUT terraform:       + volume_retype_policy = "never"
2025-08-29 14:07:26.231087 | orchestrator | 14:07:26.231 STDOUT terraform:       + volume_type = "ssd"
2025-08-29 14:07:26.231099 | orchestrator | 14:07:26.231 STDOUT terraform:     }
2025-08-29 14:07:26.231149 | orchestrator | 14:07:26.231 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-08-29 14:07:26.231198 | orchestrator | 14:07:26.231 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.231226 | orchestrator | 14:07:26.231 STDOUT terraform:       + attachment = (known after apply)
2025-08-29 14:07:26.231238 | orchestrator | 14:07:26.231 STDOUT terraform:       + availability_zone = "nova"
2025-08-29 14:07:26.231283 | orchestrator | 14:07:26.231 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.231317 | orchestrator | 14:07:26.231 STDOUT terraform:       + image_id = (known after apply)
2025-08-29 14:07:26.231352 | orchestrator | 14:07:26.231 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.231394 | orchestrator | 14:07:26.231 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-08-29 14:07:26.231431 | orchestrator | 14:07:26.231 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.231444 | orchestrator | 14:07:26.231 STDOUT terraform:       + size = 80
2025-08-29 14:07:26.231469 | orchestrator | 14:07:26.231 STDOUT terraform:       + volume_retype_policy = "never"
2025-08-29 14:07:26.231481 | orchestrator | 14:07:26.231 STDOUT terraform:       + volume_type = "ssd"
2025-08-29 14:07:26.231493 | orchestrator | 14:07:26.231 STDOUT terraform:     }
2025-08-29 14:07:26.231555 | orchestrator | 14:07:26.231 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-08-29 14:07:26.231596 | orchestrator | 14:07:26.231 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-08-29 14:07:26.231631 | orchestrator | 14:07:26.231 STDOUT terraform:       + attachment = (known after apply)
2025-08-29 14:07:26.231644 | orchestrator | 14:07:26.231 STDOUT terraform:       + availability_zone = "nova"
2025-08-29 14:07:26.231684 | orchestrator | 14:07:26.231 STDOUT terraform:       + id = (known after apply)
2025-08-29 14:07:26.231720 | orchestrator | 14:07:26.231 STDOUT terraform:       + image_id = (known after apply)
2025-08-29 14:07:26.231754 | orchestrator | 14:07:26.231 STDOUT terraform:       + metadata = (known after apply)
2025-08-29 14:07:26.231798 | orchestrator | 14:07:26.231 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-08-29 14:07:26.231832 | orchestrator | 14:07:26.231 STDOUT terraform:       + region = (known after apply)
2025-08-29 14:07:26.231846 | orchestrator | 14:07:26.231 STDOUT terraform:       + size = 80
2025-08-29 14:07:26.231865 | orchestrator | 14:07:26.231 STDOUT terraform:       + volume_retype_policy = "never"
2025-08-29 14:07:26.231898 | orchestrator | 14:07:26.231 STDOUT terraform:       + volume_type = "ssd"
2025-08-29 14:07:26.231912 | orchestrator | 14:07:26.231 STDOUT terraform:     }
2025-08-29 14:07:26.231954 | orchestrator | 14:07:26.231 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-08-29 14:07:26.231996 | orchestrator | 14:07:26.231 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-08-29 14:07:26.232034 | orchestrator | 14:07:26.231 STDOUT
terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.232048 | orchestrator | 14:07:26.232 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.232084 | orchestrator | 14:07:26.232 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.232120 | orchestrator | 14:07:26.232 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.232157 | orchestrator | 14:07:26.232 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-08-29 14:07:26.232190 | orchestrator | 14:07:26.232 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.232202 | orchestrator | 14:07:26.232 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.232229 | orchestrator | 14:07:26.232 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.232242 | orchestrator | 14:07:26.232 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.232253 | orchestrator | 14:07:26.232 STDOUT terraform:  } 2025-08-29 14:07:26.232382 | orchestrator | 14:07:26.232 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-08-29 14:07:26.232422 | orchestrator | 14:07:26.232 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.232457 | orchestrator | 14:07:26.232 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.232471 | orchestrator | 14:07:26.232 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.232526 | orchestrator | 14:07:26.232 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.232568 | orchestrator | 14:07:26.232 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.232609 | orchestrator | 14:07:26.232 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-08-29 14:07:26.232644 | orchestrator | 14:07:26.232 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.232661 | orchestrator | 14:07:26.232 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.232673 | 
orchestrator | 14:07:26.232 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.232703 | orchestrator | 14:07:26.232 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.232716 | orchestrator | 14:07:26.232 STDOUT terraform:  } 2025-08-29 14:07:26.232756 | orchestrator | 14:07:26.232 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-08-29 14:07:26.232798 | orchestrator | 14:07:26.232 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.232832 | orchestrator | 14:07:26.232 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.232853 | orchestrator | 14:07:26.232 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.232886 | orchestrator | 14:07:26.232 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.232921 | orchestrator | 14:07:26.232 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.232959 | orchestrator | 14:07:26.232 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-08-29 14:07:26.232995 | orchestrator | 14:07:26.232 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.233009 | orchestrator | 14:07:26.232 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.233039 | orchestrator | 14:07:26.233 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.233052 | orchestrator | 14:07:26.233 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.233063 | orchestrator | 14:07:26.233 STDOUT terraform:  } 2025-08-29 14:07:26.233110 | orchestrator | 14:07:26.233 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-08-29 14:07:26.233150 | orchestrator | 14:07:26.233 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.233184 | orchestrator | 14:07:26.233 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.233197 | orchestrator | 
14:07:26.233 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.233237 | orchestrator | 14:07:26.233 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.233272 | orchestrator | 14:07:26.233 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.233309 | orchestrator | 14:07:26.233 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-08-29 14:07:26.233346 | orchestrator | 14:07:26.233 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.233359 | orchestrator | 14:07:26.233 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.233383 | orchestrator | 14:07:26.233 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.233406 | orchestrator | 14:07:26.233 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.233419 | orchestrator | 14:07:26.233 STDOUT terraform:  } 2025-08-29 14:07:26.233461 | orchestrator | 14:07:26.233 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-08-29 14:07:26.233502 | orchestrator | 14:07:26.233 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.233542 | orchestrator | 14:07:26.233 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.233555 | orchestrator | 14:07:26.233 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.233595 | orchestrator | 14:07:26.233 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.233628 | orchestrator | 14:07:26.233 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.233667 | orchestrator | 14:07:26.233 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-08-29 14:07:26.233700 | orchestrator | 14:07:26.233 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.233722 | orchestrator | 14:07:26.233 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.233734 | orchestrator | 14:07:26.233 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 
14:07:26.233758 | orchestrator | 14:07:26.233 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.233771 | orchestrator | 14:07:26.233 STDOUT terraform:  } 2025-08-29 14:07:26.233813 | orchestrator | 14:07:26.233 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-08-29 14:07:26.233854 | orchestrator | 14:07:26.233 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.233888 | orchestrator | 14:07:26.233 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.233901 | orchestrator | 14:07:26.233 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.233942 | orchestrator | 14:07:26.233 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.233978 | orchestrator | 14:07:26.233 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.234013 | orchestrator | 14:07:26.233 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-08-29 14:07:26.239201 | orchestrator | 14:07:26.234 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.239256 | orchestrator | 14:07:26.238 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.239265 | orchestrator | 14:07:26.238 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.239273 | orchestrator | 14:07:26.238 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.239280 | orchestrator | 14:07:26.238 STDOUT terraform:  } 2025-08-29 14:07:26.239286 | orchestrator | 14:07:26.238 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-08-29 14:07:26.239308 | orchestrator | 14:07:26.238 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.239315 | orchestrator | 14:07:26.238 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.239322 | orchestrator | 14:07:26.238 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.239328 | 
orchestrator | 14:07:26.238 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.239335 | orchestrator | 14:07:26.238 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.239342 | orchestrator | 14:07:26.238 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-08-29 14:07:26.239349 | orchestrator | 14:07:26.238 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.239355 | orchestrator | 14:07:26.238 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.239362 | orchestrator | 14:07:26.238 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.239369 | orchestrator | 14:07:26.238 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.239375 | orchestrator | 14:07:26.238 STDOUT terraform:  } 2025-08-29 14:07:26.239382 | orchestrator | 14:07:26.238 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-08-29 14:07:26.239389 | orchestrator | 14:07:26.238 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.239407 | orchestrator | 14:07:26.238 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.239414 | orchestrator | 14:07:26.238 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.239421 | orchestrator | 14:07:26.238 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.239428 | orchestrator | 14:07:26.238 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.239434 | orchestrator | 14:07:26.238 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-08-29 14:07:26.239441 | orchestrator | 14:07:26.238 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.239448 | orchestrator | 14:07:26.238 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.239454 | orchestrator | 14:07:26.238 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.239461 | orchestrator | 14:07:26.238 STDOUT terraform:  + volume_type = "ssd" 
2025-08-29 14:07:26.239468 | orchestrator | 14:07:26.238 STDOUT terraform:  } 2025-08-29 14:07:26.239475 | orchestrator | 14:07:26.238 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-08-29 14:07:26.239482 | orchestrator | 14:07:26.238 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-08-29 14:07:26.239488 | orchestrator | 14:07:26.238 STDOUT terraform:  + attachment = (known after apply) 2025-08-29 14:07:26.239495 | orchestrator | 14:07:26.238 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.239502 | orchestrator | 14:07:26.238 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.239508 | orchestrator | 14:07:26.238 STDOUT terraform:  + metadata = (known after apply) 2025-08-29 14:07:26.239529 | orchestrator | 14:07:26.238 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-08-29 14:07:26.239536 | orchestrator | 14:07:26.238 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.239557 | orchestrator | 14:07:26.239 STDOUT terraform:  + size = 20 2025-08-29 14:07:26.239564 | orchestrator | 14:07:26.239 STDOUT terraform:  + volume_retype_policy = "never" 2025-08-29 14:07:26.239571 | orchestrator | 14:07:26.239 STDOUT terraform:  + volume_type = "ssd" 2025-08-29 14:07:26.239578 | orchestrator | 14:07:26.239 STDOUT terraform:  } 2025-08-29 14:07:26.239588 | orchestrator | 14:07:26.239 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-08-29 14:07:26.239596 | orchestrator | 14:07:26.239 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-08-29 14:07:26.239602 | orchestrator | 14:07:26.239 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:07:26.239609 | orchestrator | 14:07:26.239 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:07:26.239616 | orchestrator | 14:07:26.239 STDOUT terraform:  + all_metadata = (known after apply) 
2025-08-29 14:07:26.239622 | orchestrator | 14:07:26.239 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.239629 | orchestrator | 14:07:26.239 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.239641 | orchestrator | 14:07:26.239 STDOUT terraform:  + config_drive = true 2025-08-29 14:07:26.239648 | orchestrator | 14:07:26.239 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:07:26.239654 | orchestrator | 14:07:26.239 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:07:26.239661 | orchestrator | 14:07:26.239 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-08-29 14:07:26.239668 | orchestrator | 14:07:26.239 STDOUT terraform:  + force_delete = false 2025-08-29 14:07:26.239675 | orchestrator | 14:07:26.239 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:07:26.239681 | orchestrator | 14:07:26.239 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.239688 | orchestrator | 14:07:26.239 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:07:26.239695 | orchestrator | 14:07:26.239 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:07:26.239704 | orchestrator | 14:07:26.239 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:07:26.239711 | orchestrator | 14:07:26.239 STDOUT terraform:  + name = "testbed-manager" 2025-08-29 14:07:26.239718 | orchestrator | 14:07:26.239 STDOUT terraform:  + power_state = "active" 2025-08-29 14:07:26.239724 | orchestrator | 14:07:26.239 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.239731 | orchestrator | 14:07:26.239 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:07:26.239738 | orchestrator | 14:07:26.239 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:07:26.239747 | orchestrator | 14:07:26.239 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:07:26.239754 | orchestrator | 14:07:26.239 STDOUT terraform:  + 
user_data = (sensitive value) 2025-08-29 14:07:26.239763 | orchestrator | 14:07:26.239 STDOUT terraform:  + block_device { 2025-08-29 14:07:26.239772 | orchestrator | 14:07:26.239 STDOUT terraform:  + boot_index = 0 2025-08-29 14:07:26.239805 | orchestrator | 14:07:26.239 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:07:26.239832 | orchestrator | 14:07:26.239 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:07:26.239859 | orchestrator | 14:07:26.239 STDOUT terraform:  + multiattach = false 2025-08-29 14:07:26.239888 | orchestrator | 14:07:26.239 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:07:26.239926 | orchestrator | 14:07:26.239 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.239936 | orchestrator | 14:07:26.239 STDOUT terraform:  } 2025-08-29 14:07:26.239945 | orchestrator | 14:07:26.239 STDOUT terraform:  + network { 2025-08-29 14:07:26.239964 | orchestrator | 14:07:26.239 STDOUT terraform:  + access_network = false 2025-08-29 14:07:26.239995 | orchestrator | 14:07:26.239 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:07:26.240024 | orchestrator | 14:07:26.239 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:07:26.240056 | orchestrator | 14:07:26.240 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:07:26.240086 | orchestrator | 14:07:26.240 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:07:26.240117 | orchestrator | 14:07:26.240 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:07:26.240149 | orchestrator | 14:07:26.240 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.240159 | orchestrator | 14:07:26.240 STDOUT terraform:  } 2025-08-29 14:07:26.240169 | orchestrator | 14:07:26.240 STDOUT terraform:  } 2025-08-29 14:07:26.240211 | orchestrator | 14:07:26.240 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-08-29 14:07:26.240252 | orchestrator | 14:07:26.240 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:07:26.240285 | orchestrator | 14:07:26.240 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:07:26.240323 | orchestrator | 14:07:26.240 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:07:26.240352 | orchestrator | 14:07:26.240 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:07:26.240386 | orchestrator | 14:07:26.240 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.240409 | orchestrator | 14:07:26.240 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.240428 | orchestrator | 14:07:26.240 STDOUT terraform:  + config_drive = true 2025-08-29 14:07:26.240465 | orchestrator | 14:07:26.240 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:07:26.240498 | orchestrator | 14:07:26.240 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:07:26.240539 | orchestrator | 14:07:26.240 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:07:26.240559 | orchestrator | 14:07:26.240 STDOUT terraform:  + force_delete = false 2025-08-29 14:07:26.240593 | orchestrator | 14:07:26.240 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:07:26.240627 | orchestrator | 14:07:26.240 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.240660 | orchestrator | 14:07:26.240 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:07:26.240694 | orchestrator | 14:07:26.240 STDOUT terraform:  + image_name = (known after apply) 2025-08-29 14:07:26.240718 | orchestrator | 14:07:26.240 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:07:26.240748 | orchestrator | 14:07:26.240 STDOUT terraform:  + name = "testbed-node-0" 2025-08-29 14:07:26.240772 | orchestrator | 14:07:26.240 STDOUT terraform:  + power_state = "active" 2025-08-29 14:07:26.240806 | orchestrator | 14:07:26.240 STDOUT terraform:  + region = (known after 
apply) 2025-08-29 14:07:26.240840 | orchestrator | 14:07:26.240 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:07:26.240862 | orchestrator | 14:07:26.240 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:07:26.240896 | orchestrator | 14:07:26.240 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:07:26.240945 | orchestrator | 14:07:26.240 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:07:26.240956 | orchestrator | 14:07:26.240 STDOUT terraform:  + block_device { 2025-08-29 14:07:26.240980 | orchestrator | 14:07:26.240 STDOUT terraform:  + boot_index = 0 2025-08-29 14:07:26.241008 | orchestrator | 14:07:26.240 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:07:26.241037 | orchestrator | 14:07:26.241 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:07:26.241063 | orchestrator | 14:07:26.241 STDOUT terraform:  + multiattach = false 2025-08-29 14:07:26.241093 | orchestrator | 14:07:26.241 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:07:26.241129 | orchestrator | 14:07:26.241 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.241154 | orchestrator | 14:07:26.241 STDOUT terraform:  } 2025-08-29 14:07:26.241165 | orchestrator | 14:07:26.241 STDOUT terraform:  + network { 2025-08-29 14:07:26.241184 | orchestrator | 14:07:26.241 STDOUT terraform:  + access_network = false 2025-08-29 14:07:26.241216 | orchestrator | 14:07:26.241 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-08-29 14:07:26.241249 | orchestrator | 14:07:26.241 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:07:26.241279 | orchestrator | 14:07:26.241 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:07:26.241309 | orchestrator | 14:07:26.241 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:07:26.241340 | orchestrator | 14:07:26.241 STDOUT terraform:  + port = (known after apply) 2025-08-29 
14:07:26.241371 | orchestrator | 14:07:26.241 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.241381 | orchestrator | 14:07:26.241 STDOUT terraform:  } 2025-08-29 14:07:26.241390 | orchestrator | 14:07:26.241 STDOUT terraform:  } 2025-08-29 14:07:26.241448 | orchestrator | 14:07:26.241 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-08-29 14:07:26.241468 | orchestrator | 14:07:26.241 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:07:26.241564 | orchestrator | 14:07:26.241 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:07:26.241574 | orchestrator | 14:07:26.241 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:07:26.241584 | orchestrator | 14:07:26.241 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:07:26.241607 | orchestrator | 14:07:26.241 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.241630 | orchestrator | 14:07:26.241 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.241649 | orchestrator | 14:07:26.241 STDOUT terraform:  + config_drive = true 2025-08-29 14:07:26.241684 | orchestrator | 14:07:26.241 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:07:26.241718 | orchestrator | 14:07:26.241 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:07:26.241748 | orchestrator | 14:07:26.241 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:07:26.241770 | orchestrator | 14:07:26.241 STDOUT terraform:  + force_delete = false 2025-08-29 14:07:26.241809 | orchestrator | 14:07:26.241 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-08-29 14:07:26.241837 | orchestrator | 14:07:26.241 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.241872 | orchestrator | 14:07:26.241 STDOUT terraform:  + image_id = (known after apply) 2025-08-29 14:07:26.241906 | orchestrator | 14:07:26.241 STDOUT 
terraform:  + image_name = (known after apply) 2025-08-29 14:07:26.241929 | orchestrator | 14:07:26.241 STDOUT terraform:  + key_pair = "testbed" 2025-08-29 14:07:26.241959 | orchestrator | 14:07:26.241 STDOUT terraform:  + name = "testbed-node-1" 2025-08-29 14:07:26.241981 | orchestrator | 14:07:26.241 STDOUT terraform:  + power_state = "active" 2025-08-29 14:07:26.243106 | orchestrator | 14:07:26.241 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.243126 | orchestrator | 14:07:26.242 STDOUT terraform:  + security_groups = (known after apply) 2025-08-29 14:07:26.243134 | orchestrator | 14:07:26.242 STDOUT terraform:  + stop_before_destroy = false 2025-08-29 14:07:26.243140 | orchestrator | 14:07:26.242 STDOUT terraform:  + updated = (known after apply) 2025-08-29 14:07:26.243682 | orchestrator | 14:07:26.243 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-08-29 14:07:26.243785 | orchestrator | 14:07:26.243 STDOUT terraform:  + block_device { 2025-08-29 14:07:26.244002 | orchestrator | 14:07:26.243 STDOUT terraform:  + boot_index = 0 2025-08-29 14:07:26.244573 | orchestrator | 14:07:26.244 STDOUT terraform:  + delete_on_termination = false 2025-08-29 14:07:26.245097 | orchestrator | 14:07:26.244 STDOUT terraform:  + destination_type = "volume" 2025-08-29 14:07:26.245478 | orchestrator | 14:07:26.245 STDOUT terraform:  + multiattach = false 2025-08-29 14:07:26.245726 | orchestrator | 14:07:26.245 STDOUT terraform:  + source_type = "volume" 2025-08-29 14:07:26.246075 | orchestrator | 14:07:26.245 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.246279 | orchestrator | 14:07:26.246 STDOUT terraform:  } 2025-08-29 14:07:26.246429 | orchestrator | 14:07:26.246 STDOUT terraform:  + network { 2025-08-29 14:07:26.246571 | orchestrator | 14:07:26.246 STDOUT terraform:  + access_network = false 2025-08-29 14:07:26.246920 | orchestrator | 14:07:26.246 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-08-29 14:07:26.247189 | orchestrator | 14:07:26.246 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-08-29 14:07:26.247644 | orchestrator | 14:07:26.247 STDOUT terraform:  + mac = (known after apply) 2025-08-29 14:07:26.248466 | orchestrator | 14:07:26.247 STDOUT terraform:  + name = (known after apply) 2025-08-29 14:07:26.248585 | orchestrator | 14:07:26.248 STDOUT terraform:  + port = (known after apply) 2025-08-29 14:07:26.248622 | orchestrator | 14:07:26.248 STDOUT terraform:  + uuid = (known after apply) 2025-08-29 14:07:26.248639 | orchestrator | 14:07:26.248 STDOUT terraform:  } 2025-08-29 14:07:26.248648 | orchestrator | 14:07:26.248 STDOUT terraform:  } 2025-08-29 14:07:26.248701 | orchestrator | 14:07:26.248 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-08-29 14:07:26.248744 | orchestrator | 14:07:26.248 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-08-29 14:07:26.248778 | orchestrator | 14:07:26.248 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-08-29 14:07:26.248819 | orchestrator | 14:07:26.248 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-08-29 14:07:26.248853 | orchestrator | 14:07:26.248 STDOUT terraform:  + all_metadata = (known after apply) 2025-08-29 14:07:26.248886 | orchestrator | 14:07:26.248 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.248910 | orchestrator | 14:07:26.248 STDOUT terraform:  + availability_zone = "nova" 2025-08-29 14:07:26.248930 | orchestrator | 14:07:26.248 STDOUT terraform:  + config_drive = true 2025-08-29 14:07:26.248971 | orchestrator | 14:07:26.248 STDOUT terraform:  + created = (known after apply) 2025-08-29 14:07:26.249011 | orchestrator | 14:07:26.248 STDOUT terraform:  + flavor_id = (known after apply) 2025-08-29 14:07:26.249041 | orchestrator | 14:07:26.249 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-08-29 14:07:26.249064 | orchestrator | 14:07:26.249 
STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:07:26.292498 | orchestrator | 14:07:26.292 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:07:26.292564 | orchestrator | 14:07:26.292 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.292624 | orchestrator | 14:07:26.292 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:07:26.292673 | orchestrator | 14:07:26.292 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:07:26.292720 | orchestrator | 14:07:26.292 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:07:26.292756 | orchestrator | 14:07:26.292 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:07:26.292792 | orchestrator | 14:07:26.292 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.292828 | orchestrator | 14:07:26.292 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:07:26.292867 | orchestrator | 14:07:26.292 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:26.292901 | orchestrator | 14:07:26.292 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:07:26.292936 | orchestrator | 14:07:26.292 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:07:26.292972 | orchestrator | 14:07:26.292 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.293021 | orchestrator | 14:07:26.292 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:07:26.293057 | orchestrator | 14:07:26.293 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:26.293076 | orchestrator | 14:07:26.293 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.293104 | orchestrator | 14:07:26.293 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:07:26.293122 | orchestrator | 14:07:26.293 STDOUT terraform:  } 2025-08-29 14:07:26.293142 | orchestrator | 14:07:26.293 STDOUT terraform:  
+ allowed_address_pairs { 2025-08-29 14:07:26.293183 | orchestrator | 14:07:26.293 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:07:26.293189 | orchestrator | 14:07:26.293 STDOUT terraform:  } 2025-08-29 14:07:26.293210 | orchestrator | 14:07:26.293 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.293256 | orchestrator | 14:07:26.293 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:07:26.293268 | orchestrator | 14:07:26.293 STDOUT terraform:  } 2025-08-29 14:07:26.293293 | orchestrator | 14:07:26.293 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.293321 | orchestrator | 14:07:26.293 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:07:26.293327 | orchestrator | 14:07:26.293 STDOUT terraform:  } 2025-08-29 14:07:26.293353 | orchestrator | 14:07:26.293 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:07:26.293366 | orchestrator | 14:07:26.293 STDOUT terraform:  + fixed_ip { 2025-08-29 14:07:26.293390 | orchestrator | 14:07:26.293 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-08-29 14:07:26.293429 | orchestrator | 14:07:26.293 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:26.293443 | orchestrator | 14:07:26.293 STDOUT terraform:  } 2025-08-29 14:07:26.293449 | orchestrator | 14:07:26.293 STDOUT terraform:  } 2025-08-29 14:07:26.293499 | orchestrator | 14:07:26.293 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-08-29 14:07:26.293568 | orchestrator | 14:07:26.293 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:07:26.293603 | orchestrator | 14:07:26.293 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:07:26.293637 | orchestrator | 14:07:26.293 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:07:26.293670 | orchestrator | 14:07:26.293 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-08-29 14:07:26.293710 | orchestrator | 14:07:26.293 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.293747 | orchestrator | 14:07:26.293 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:07:26.293781 | orchestrator | 14:07:26.293 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:07:26.293826 | orchestrator | 14:07:26.293 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:07:26.293856 | orchestrator | 14:07:26.293 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:07:26.293893 | orchestrator | 14:07:26.293 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.293925 | orchestrator | 14:07:26.293 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:07:26.293977 | orchestrator | 14:07:26.293 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:26.294028 | orchestrator | 14:07:26.293 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:07:26.294302 | orchestrator | 14:07:26.294 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:07:26.294806 | orchestrator | 14:07:26.294 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.295604 | orchestrator | 14:07:26.294 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:07:26.295863 | orchestrator | 14:07:26.295 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:26.296046 | orchestrator | 14:07:26.295 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.296188 | orchestrator | 14:07:26.296 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:07:26.296349 | orchestrator | 14:07:26.296 STDOUT terraform:  } 2025-08-29 14:07:26.296421 | orchestrator | 14:07:26.296 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.296719 | orchestrator | 14:07:26.296 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:07:26.296985 | 
orchestrator | 14:07:26.296 STDOUT terraform:  } 2025-08-29 14:07:26.297126 | orchestrator | 14:07:26.297 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.297476 | orchestrator | 14:07:26.297 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:07:26.297950 | orchestrator | 14:07:26.297 STDOUT terraform:  } 2025-08-29 14:07:26.297962 | orchestrator | 14:07:26.297 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.300630 | orchestrator | 14:07:26.297 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:07:26.300670 | orchestrator | 14:07:26.298 STDOUT terraform:  } 2025-08-29 14:07:26.300676 | orchestrator | 14:07:26.298 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:07:26.300680 | orchestrator | 14:07:26.298 STDOUT terraform:  + fixed_ip { 2025-08-29 14:07:26.300685 | orchestrator | 14:07:26.298 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-08-29 14:07:26.300689 | orchestrator | 14:07:26.299 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:26.300693 | orchestrator | 14:07:26.299 STDOUT terraform:  } 2025-08-29 14:07:26.300697 | orchestrator | 14:07:26.299 STDOUT terraform:  } 2025-08-29 14:07:26.300701 | orchestrator | 14:07:26.299 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-08-29 14:07:26.300705 | orchestrator | 14:07:26.300 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:07:26.301305 | orchestrator | 14:07:26.300 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:07:26.301444 | orchestrator | 14:07:26.300 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:07:26.302509 | orchestrator | 14:07:26.301 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:07:26.303233 | orchestrator | 14:07:26.302 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.304063 | orchestrator | 
14:07:26.303 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:07:26.304301 | orchestrator | 14:07:26.304 STDOUT terraform:  + device_owner = (known after apply) 2025-08-29 14:07:26.304343 | orchestrator | 14:07:26.304 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:07:26.304394 | orchestrator | 14:07:26.304 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:07:26.304430 | orchestrator | 14:07:26.304 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.304465 | orchestrator | 14:07:26.304 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:07:26.304500 | orchestrator | 14:07:26.304 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:26.304570 | orchestrator | 14:07:26.304 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:07:26.304601 | orchestrator | 14:07:26.304 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:07:26.304636 | orchestrator | 14:07:26.304 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.304676 | orchestrator | 14:07:26.304 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:07:26.304712 | orchestrator | 14:07:26.304 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:26.304745 | orchestrator | 14:07:26.304 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.304773 | orchestrator | 14:07:26.304 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:07:26.304787 | orchestrator | 14:07:26.304 STDOUT terraform:  } 2025-08-29 14:07:26.304807 | orchestrator | 14:07:26.304 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.304836 | orchestrator | 14:07:26.304 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:07:26.304849 | orchestrator | 14:07:26.304 STDOUT terraform:  } 2025-08-29 14:07:26.304869 | orchestrator | 14:07:26.304 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 
14:07:26.304902 | orchestrator | 14:07:26.304 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-08-29 14:07:26.304921 | orchestrator | 14:07:26.304 STDOUT terraform:  } 2025-08-29 14:07:26.304940 | orchestrator | 14:07:26.304 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.304977 | orchestrator | 14:07:26.304 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:07:26.304991 | orchestrator | 14:07:26.304 STDOUT terraform:  } 2025-08-29 14:07:26.305013 | orchestrator | 14:07:26.304 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:07:26.305033 | orchestrator | 14:07:26.305 STDOUT terraform:  + fixed_ip { 2025-08-29 14:07:26.305057 | orchestrator | 14:07:26.305 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-08-29 14:07:26.305090 | orchestrator | 14:07:26.305 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:26.305112 | orchestrator | 14:07:26.305 STDOUT terraform:  } 2025-08-29 14:07:26.305126 | orchestrator | 14:07:26.305 STDOUT terraform:  } 2025-08-29 14:07:26.305171 | orchestrator | 14:07:26.305 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-08-29 14:07:26.305220 | orchestrator | 14:07:26.305 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-08-29 14:07:26.305260 | orchestrator | 14:07:26.305 STDOUT terraform:  + admin_state_up = (known after apply) 2025-08-29 14:07:26.305303 | orchestrator | 14:07:26.305 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-08-29 14:07:26.305338 | orchestrator | 14:07:26.305 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-08-29 14:07:26.305372 | orchestrator | 14:07:26.305 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.305413 | orchestrator | 14:07:26.305 STDOUT terraform:  + device_id = (known after apply) 2025-08-29 14:07:26.305448 | orchestrator | 14:07:26.305 STDOUT terraform:  + device_owner = (known after 
apply) 2025-08-29 14:07:26.305482 | orchestrator | 14:07:26.305 STDOUT terraform:  + dns_assignment = (known after apply) 2025-08-29 14:07:26.305541 | orchestrator | 14:07:26.305 STDOUT terraform:  + dns_name = (known after apply) 2025-08-29 14:07:26.305570 | orchestrator | 14:07:26.305 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.305614 | orchestrator | 14:07:26.305 STDOUT terraform:  + mac_address = (known after apply) 2025-08-29 14:07:26.305650 | orchestrator | 14:07:26.305 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:26.305696 | orchestrator | 14:07:26.305 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-08-29 14:07:26.305732 | orchestrator | 14:07:26.305 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-08-29 14:07:26.305767 | orchestrator | 14:07:26.305 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.305800 | orchestrator | 14:07:26.305 STDOUT terraform:  + security_group_ids = (known after apply) 2025-08-29 14:07:26.305840 | orchestrator | 14:07:26.305 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:26.305859 | orchestrator | 14:07:26.305 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.305892 | orchestrator | 14:07:26.305 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-08-29 14:07:26.305906 | orchestrator | 14:07:26.305 STDOUT terraform:  2025-08-29 14:07:26.306009 | orchestrator | 14:07:26.305 STDOUT terraform:  } 2025-08-29 14:07:26.306049 | orchestrator | 14:07:26.306 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.306077 | orchestrator | 14:07:26.306 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-08-29 14:07:26.306100 | orchestrator | 14:07:26.306 STDOUT terraform:  } 2025-08-29 14:07:26.306119 | orchestrator | 14:07:26.306 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.306162 | orchestrator | 14:07:26.306 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-08-29 14:07:26.306175 | orchestrator | 14:07:26.306 STDOUT terraform:  } 2025-08-29 14:07:26.306194 | orchestrator | 14:07:26.306 STDOUT terraform:  + allowed_address_pairs { 2025-08-29 14:07:26.306234 | orchestrator | 14:07:26.306 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-08-29 14:07:26.306267 | orchestrator | 14:07:26.306 STDOUT terraform:  } 2025-08-29 14:07:26.306306 | orchestrator | 14:07:26.306 STDOUT terraform:  + binding (known after apply) 2025-08-29 14:07:26.306333 | orchestrator | 14:07:26.306 STDOUT terraform:  + fixed_ip { 2025-08-29 14:07:26.306373 | orchestrator | 14:07:26.306 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-08-29 14:07:26.306427 | orchestrator | 14:07:26.306 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:26.306443 | orchestrator | 14:07:26.306 STDOUT terraform:  } 2025-08-29 14:07:26.306456 | orchestrator | 14:07:26.306 STDOUT terraform:  } 2025-08-29 14:07:26.306503 | orchestrator | 14:07:26.306 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-08-29 14:07:26.306562 | orchestrator | 14:07:26.306 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-08-29 14:07:26.306581 | orchestrator | 14:07:26.306 STDOUT terraform:  + force_destroy = false 2025-08-29 14:07:26.306621 | orchestrator | 14:07:26.306 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.306650 | orchestrator | 14:07:26.306 STDOUT terraform:  + port_id = (known after apply) 2025-08-29 14:07:26.306678 | orchestrator | 14:07:26.306 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.306705 | orchestrator | 14:07:26.306 STDOUT terraform:  + router_id = (known after apply) 2025-08-29 14:07:26.306733 | orchestrator | 14:07:26.306 STDOUT terraform:  + subnet_id = (known after apply) 2025-08-29 14:07:26.306761 | orchestrator | 14:07:26.306 STDOUT terraform:  } 2025-08-29 14:07:26.306797 | orchestrator | 
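The six identical port blocks (differing only in the fixed IP, 192.168.16.10 through .15) suggest a counted resource with repeated `allowed_address_pairs` blocks. A minimal HCL sketch of what could produce this plan; resource and attribute names follow the plan output, while the network/subnet references are assumptions:

```hcl
# Hypothetical reconstruction of the resource behind the plan above.
# The network/subnet references are assumed names, not confirmed by the log.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id # assumed reference

  # One address pair per CIDR/VIP seen in the plan.
  dynamic "allowed_address_pairs" {
    for_each = [
      "192.168.112.0/20",
      "192.168.16.254/20",
      "192.168.16.8/20",
      "192.168.16.9/20",
    ]
    content {
      ip_address = allowed_address_pairs.value
    }
  }

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed reference
    ip_address = "192.168.16.${10 + count.index}" # yields .10 .. .15
  }
}
```

The `dynamic` block is one common way to get several identical nested blocks from a list; six literal `allowed_address_pairs` blocks would plan identically.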
  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
openstack_networking_secgroup_v2.security_group_management will be created 2025-08-29 14:07:26.318438 | orchestrator | 14:07:26.317 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-08-29 14:07:26.318442 | orchestrator | 14:07:26.317 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.318446 | orchestrator | 14:07:26.317 STDOUT terraform:  + description = "management security group" 2025-08-29 14:07:26.318449 | orchestrator | 14:07:26.317 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.318453 | orchestrator | 14:07:26.317 STDOUT terraform:  + name = "testbed-management" 2025-08-29 14:07:26.318459 | orchestrator | 14:07:26.317 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.318463 | orchestrator | 14:07:26.317 STDOUT terraform:  + stateful = (known after apply) 2025-08-29 14:07:26.318467 | orchestrator | 14:07:26.317 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:26.318471 | orchestrator | 14:07:26.317 STDOUT terraform:  } 2025-08-29 14:07:26.318475 | orchestrator | 14:07:26.317 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-08-29 14:07:26.318480 | orchestrator | 14:07:26.317 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-08-29 14:07:26.318484 | orchestrator | 14:07:26.317 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.318488 | orchestrator | 14:07:26.317 STDOUT terraform:  + description = "node security group" 2025-08-29 14:07:26.318492 | orchestrator | 14:07:26.317 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.318496 | orchestrator | 14:07:26.317 STDOUT terraform:  + name = "testbed-node" 2025-08-29 14:07:26.318500 | orchestrator | 14:07:26.317 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.318503 | orchestrator | 14:07:26.317 STDOUT terraform:  + stateful = (known after 
apply) 2025-08-29 14:07:26.318507 | orchestrator | 14:07:26.317 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:26.318511 | orchestrator | 14:07:26.317 STDOUT terraform:  } 2025-08-29 14:07:26.318539 | orchestrator | 14:07:26.317 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-08-29 14:07:26.318543 | orchestrator | 14:07:26.317 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-08-29 14:07:26.318547 | orchestrator | 14:07:26.317 STDOUT terraform:  + all_tags = (known after apply) 2025-08-29 14:07:26.318551 | orchestrator | 14:07:26.317 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-08-29 14:07:26.318554 | orchestrator | 14:07:26.317 STDOUT terraform:  + dns_nameservers = [ 2025-08-29 14:07:26.318558 | orchestrator | 14:07:26.317 STDOUT terraform:  + "8.8.8.8", 2025-08-29 14:07:26.318566 | orchestrator | 14:07:26.317 STDOUT terraform:  + "9.9.9.9", 2025-08-29 14:07:26.318570 | orchestrator | 14:07:26.317 STDOUT terraform:  ] 2025-08-29 14:07:26.318574 | orchestrator | 14:07:26.317 STDOUT terraform:  + enable_dhcp = true 2025-08-29 14:07:26.318577 | orchestrator | 14:07:26.317 STDOUT terraform:  + gateway_ip = (known after apply) 2025-08-29 14:07:26.319116 | orchestrator | 14:07:26.318 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.319286 | orchestrator | 14:07:26.319 STDOUT terraform:  + ip_version = 4 2025-08-29 14:07:26.320324 | orchestrator | 14:07:26.319 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-08-29 14:07:26.320341 | orchestrator | 14:07:26.319 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-08-29 14:07:26.320344 | orchestrator | 14:07:26.319 STDOUT terraform:  + name = "subnet-testbed-management" 2025-08-29 14:07:26.320348 | orchestrator | 14:07:26.319 STDOUT terraform:  + network_id = (known after apply) 2025-08-29 14:07:26.320352 | orchestrator | 14:07:26.319 STDOUT terraform:  + no_gateway = 
false 2025-08-29 14:07:26.320356 | orchestrator | 14:07:26.319 STDOUT terraform:  + region = (known after apply) 2025-08-29 14:07:26.320360 | orchestrator | 14:07:26.319 STDOUT terraform:  + service_types = (known after apply) 2025-08-29 14:07:26.320363 | orchestrator | 14:07:26.319 STDOUT terraform:  + tenant_id = (known after apply) 2025-08-29 14:07:26.320367 | orchestrator | 14:07:26.320 STDOUT terraform:  + allocation_pool { 2025-08-29 14:07:26.320371 | orchestrator | 14:07:26.320 STDOUT terraform:  + end = "192.168.31.250" 2025-08-29 14:07:26.320375 | orchestrator | 14:07:26.320 STDOUT terraform:  + start = "192.168.31.200" 2025-08-29 14:07:26.320379 | orchestrator | 14:07:26.320 STDOUT terraform:  } 2025-08-29 14:07:26.320382 | orchestrator | 14:07:26.320 STDOUT terraform:  } 2025-08-29 14:07:26.320386 | orchestrator | 14:07:26.320 STDOUT terraform:  # terraform_data.image will be created 2025-08-29 14:07:26.320390 | orchestrator | 14:07:26.320 STDOUT terraform:  + resource "terraform_data" "image" { 2025-08-29 14:07:26.320394 | orchestrator | 14:07:26.320 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.320397 | orchestrator | 14:07:26.320 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-08-29 14:07:26.320401 | orchestrator | 14:07:26.320 STDOUT terraform:  + output = (known after apply) 2025-08-29 14:07:26.320405 | orchestrator | 14:07:26.320 STDOUT terraform:  } 2025-08-29 14:07:26.320414 | orchestrator | 14:07:26.320 STDOUT terraform:  # terraform_data.image_node will be created 2025-08-29 14:07:26.320418 | orchestrator | 14:07:26.320 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-08-29 14:07:26.320422 | orchestrator | 14:07:26.320 STDOUT terraform:  + id = (known after apply) 2025-08-29 14:07:26.320426 | orchestrator | 14:07:26.320 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-08-29 14:07:26.320433 | orchestrator | 14:07:26.320 STDOUT terraform:  + output = (known after apply) 2025-08-29 14:07:26.320436 | 
orchestrator | 14:07:26.320 STDOUT terraform:  } 2025-08-29 14:07:26.320440 | orchestrator | 14:07:26.320 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-08-29 14:07:26.320454 | orchestrator | 14:07:26.320 STDOUT terraform: Changes to Outputs: 2025-08-29 14:07:26.320458 | orchestrator | 14:07:26.320 STDOUT terraform:  + manager_address = (sensitive value) 2025-08-29 14:07:26.320463 | orchestrator | 14:07:26.320 STDOUT terraform:  + private_key = (sensitive value) 2025-08-29 14:07:29.697063 | orchestrator | 14:07:29.693 STDOUT terraform: terraform_data.image_node: Creating... 2025-08-29 14:07:29.697113 | orchestrator | 14:07:29.693 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=f086fd94-4cdb-3d03-683e-5a5039d744cb] 2025-08-29 14:07:29.697120 | orchestrator | 14:07:29.693 STDOUT terraform: terraform_data.image: Creating... 2025-08-29 14:07:29.701816 | orchestrator | 14:07:29.697 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=ec1ddff3-977d-bee1-578e-224670755ce8] 2025-08-29 14:07:29.708384 | orchestrator | 14:07:29.707 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-08-29 14:07:29.721921 | orchestrator | 14:07:29.718 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-08-29 14:07:29.721959 | orchestrator | 14:07:29.719 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-08-29 14:07:29.721964 | orchestrator | 14:07:29.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-08-29 14:07:29.721968 | orchestrator | 14:07:29.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-08-29 14:07:29.730073 | orchestrator | 14:07:29.726 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-08-29 14:07:29.730110 | orchestrator | 14:07:29.726 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
2025-08-29 14:07:29.730115 | orchestrator | 14:07:29.726 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-08-29 14:07:29.730120 | orchestrator | 14:07:29.726 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-08-29 14:07:29.735074 | orchestrator | 14:07:29.734 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-08-29 14:07:30.187122 | orchestrator | 14:07:30.186 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 14:07:30.194258 | orchestrator | 14:07:30.194 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-08-29 14:07:30.246251 | orchestrator | 14:07:30.245 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-08-29 14:07:30.252691 | orchestrator | 14:07:30.252 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-08-29 14:07:30.731561 | orchestrator | 14:07:30.731 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=86362647-0026-4802-8a8c-54dca29f6738]
2025-08-29 14:07:30.736207 | orchestrator | 14:07:30.735 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-08-29 14:07:30.792456 | orchestrator | 14:07:30.792 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-08-29 14:07:30.801242 | orchestrator | 14:07:30.800 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-08-29 14:07:33.365395 | orchestrator | 14:07:33.365 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=fa9350c4-64bc-4afb-b502-f801a6f70a24]
2025-08-29 14:07:33.378730 | orchestrator | 14:07:33.378 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-08-29 14:07:33.387012 | orchestrator | 14:07:33.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=d4d11aa1-e648-4125-bb7f-b16cf1114c9f]
2025-08-29 14:07:33.387061 | orchestrator | 14:07:33.386 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=d9bfb73537860db344fcd0d25de0d37df4741e77]
2025-08-29 14:07:33.410216 | orchestrator | 14:07:33.409 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-08-29 14:07:33.448679 | orchestrator | 14:07:33.448 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=b50f501b-7dcc-49bb-af34-bcea70be6a61]
2025-08-29 14:07:33.459335 | orchestrator | 14:07:33.459 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=54964cbc-4c5d-4365-aa24-d13bcc6e495a]
2025-08-29 14:07:33.459380 | orchestrator | 14:07:33.459 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-08-29 14:07:33.471855 | orchestrator | 14:07:33.471 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9]
2025-08-29 14:07:33.476734 | orchestrator | 14:07:33.476 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=94dd18c58fd0996a8251a38789eab550ec668523]
2025-08-29 14:07:33.476909 | orchestrator | 14:07:33.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-08-29 14:07:33.477009 | orchestrator | 14:07:33.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-08-29 14:07:33.483579 | orchestrator | 14:07:33.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-08-29 14:07:33.492112 | orchestrator | 14:07:33.492 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-08-29 14:07:33.506495 | orchestrator | 14:07:33.506 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=00b08f76-6c14-40db-8d96-1843b494176b]
2025-08-29 14:07:33.508482 | orchestrator | 14:07:33.508 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=34b7b0aa-9c3f-4af7-b9a4-6261675e7012]
2025-08-29 14:07:33.513713 | orchestrator | 14:07:33.513 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-08-29 14:07:33.515106 | orchestrator | 14:07:33.515 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-08-29 14:07:33.594753 | orchestrator | 14:07:33.594 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=8e840163-cd15-4bab-ac0d-7731db5a26c7]
2025-08-29 14:07:33.595730 | orchestrator | 14:07:33.595 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=8d4c2d77-38a8-4e70-8dcf-48e237e577e8]
2025-08-29 14:07:34.177226 | orchestrator | 14:07:34.177 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=f70d649e-4ec7-4249-8223-193a765bc6dc]
2025-08-29 14:07:34.578121 | orchestrator | 14:07:34.577 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=581da919-144d-4653-ac42-e872459a0cc5]
2025-08-29 14:07:34.592138 | orchestrator | 14:07:34.592 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-08-29 14:07:36.916084 | orchestrator | 14:07:36.915 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=34603090-c146-4151-9356-33e1f81df516]
2025-08-29 14:07:36.949927 | orchestrator | 14:07:36.949 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=a59956ee-14fc-4c64-8315-f5435014482a]
2025-08-29 14:07:36.975389 | orchestrator | 14:07:36.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=198780c2-b0aa-4267-81d7-dd433498eb4e]
2025-08-29 14:07:36.990982 | orchestrator | 14:07:36.990 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=26a03f40-a287-4201-85ef-dae46b1b8ac7]
2025-08-29 14:07:36.993048 | orchestrator | 14:07:36.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=eca37a69-3c0f-4357-9670-f9669d9e69b8]
2025-08-29 14:07:37.146544 | orchestrator | 14:07:37.146 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=12058b5d-7e0f-4769-b570-e8724a20121a]
2025-08-29 14:07:38.133421 | orchestrator | 14:07:38.132 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=f6444b44-cf7b-4074-b54f-efe4812f0f4d]
2025-08-29 14:07:38.141135 | orchestrator | 14:07:38.140 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-08-29 14:07:38.141217 | orchestrator | 14:07:38.140 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-08-29 14:07:38.142244 | orchestrator | 14:07:38.141 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-08-29 14:07:38.323907 | orchestrator | 14:07:38.322 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=ad7a0ea4-a6c4-4fcf-8ba2-27e250d5321a]
2025-08-29 14:07:38.334652 | orchestrator | 14:07:38.334 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-08-29 14:07:38.334706 | orchestrator | 14:07:38.334 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-08-29 14:07:38.334784 | orchestrator | 14:07:38.334 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-08-29 14:07:38.338224 | orchestrator | 14:07:38.338 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-08-29 14:07:38.339367 | orchestrator | 14:07:38.339 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-08-29 14:07:38.339506 | orchestrator | 14:07:38.339 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-08-29 14:07:38.448822 | orchestrator | 14:07:38.448 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=3c7c1906-9562-4e86-a01d-6e4d1bb71885]
2025-08-29 14:07:38.454008 | orchestrator | 14:07:38.453 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-08-29 14:07:38.458012 | orchestrator | 14:07:38.455 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-08-29 14:07:38.464616 | orchestrator | 14:07:38.462 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-08-29 14:07:38.587200 | orchestrator | 14:07:38.586 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=ed5bc063-c470-4af7-a33c-08bb171b43da]
2025-08-29 14:07:38.593627 | orchestrator | 14:07:38.593 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-08-29 14:07:38.598241 | orchestrator | 14:07:38.597 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=68868733-2f3a-43ca-b8ec-6076cc2051ff]
2025-08-29 14:07:38.609506 | orchestrator | 14:07:38.609 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-08-29 14:07:38.734850 | orchestrator | 14:07:38.734 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=938973ed-cfa0-476a-b5a9-ee2146adf460]
2025-08-29 14:07:38.746606 | orchestrator | 14:07:38.746 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-08-29 14:07:38.878888 | orchestrator | 14:07:38.878 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=a89e3b4f-9038-40ae-8c34-b0c7dbffa52d]
2025-08-29 14:07:38.888325 | orchestrator | 14:07:38.888 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-08-29 14:07:38.960245 | orchestrator | 14:07:38.959 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=13c81560-1ded-452f-922c-efc26c15923c]
2025-08-29 14:07:38.971789 | orchestrator | 14:07:38.971 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-08-29 14:07:39.083080 | orchestrator | 14:07:39.082 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=c94b015d-cb6c-4acd-9d7e-ae9bdb95aacd]
2025-08-29 14:07:39.093498 | orchestrator | 14:07:39.093 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-08-29 14:07:39.104462 | orchestrator | 14:07:39.104 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=4e0105cf-3e12-4ea7-99c4-3a77034657d8]
2025-08-29 14:07:39.122585 | orchestrator | 14:07:39.122 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-08-29 14:07:39.260159 | orchestrator | 14:07:39.259 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=db557236-aaf0-4e1c-9aab-1acee3080f13]
2025-08-29 14:07:39.344818 | orchestrator | 14:07:39.344 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=d5db6d49-72e1-4526-9fdb-c621f4e3aa33]
2025-08-29 14:07:39.511129 | orchestrator | 14:07:39.510 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=4919fb37-095d-4556-b9a5-79b27dadcf2b]
2025-08-29 14:07:39.553477 | orchestrator | 14:07:39.553 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=cb9004e6-ec2a-46ed-b09f-6fb0fe5080bd]
2025-08-29 14:07:39.762911 | orchestrator | 14:07:39.762 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=3b451fa1-62d8-434a-bd65-3104752597f9]
2025-08-29 14:07:39.831924 | orchestrator | 14:07:39.831 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=407e378d-9056-4ea9-b909-9301da48cb26]
2025-08-29 14:07:39.908861 | orchestrator | 14:07:39.908 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=b039e45b-8aeb-46cb-885c-4069cc65053f]
2025-08-29 14:07:39.935256 | orchestrator | 14:07:39.935 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=1a42ce6d-00d7-4979-bf63-a6a515facda8]
2025-08-29 14:07:39.986166 | orchestrator | 14:07:39.985 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=3eaf0dd8-86e3-49f1-a820-72552b85b3a9]
2025-08-29 14:07:41.581824 | orchestrator | 14:07:41.581 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=4acaf4c4-5cb4-4592-880e-0028ab698c34]
2025-08-29 14:07:41.595071 | orchestrator | 14:07:41.594 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-08-29 14:07:41.610696 | orchestrator | 14:07:41.610 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-08-29 14:07:41.612933 | orchestrator | 14:07:41.612 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-08-29 14:07:41.627918 | orchestrator | 14:07:41.626 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-08-29 14:07:41.630935 | orchestrator | 14:07:41.630 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-08-29 14:07:41.633354 | orchestrator | 14:07:41.633 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-08-29 14:07:41.651713 | orchestrator | 14:07:41.651 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-08-29 14:07:43.070303 | orchestrator | 14:07:43.070 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=1aa8f629-1281-484b-b3a3-80bfc7412d60]
2025-08-29 14:07:43.094263 | orchestrator | 14:07:43.094 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-08-29 14:07:43.102277 | orchestrator | 14:07:43.102 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-08-29 14:07:43.106686 | orchestrator | 14:07:43.106 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=98979c7e4556a4db47b901b800a0002793acb55b]
2025-08-29 14:07:43.107543 | orchestrator | 14:07:43.107 STDOUT terraform: local_file.inventory: Creating...
2025-08-29 14:07:43.111146 | orchestrator | 14:07:43.111 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=56bc032664212f7d3af0eb607338bae1ccab65fd]
2025-08-29 14:07:44.568766 | orchestrator | 14:07:44.568 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=1aa8f629-1281-484b-b3a3-80bfc7412d60]
2025-08-29 14:07:51.611620 | orchestrator | 14:07:51.611 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-08-29 14:07:51.620818 | orchestrator | 14:07:51.620 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-08-29 14:07:51.629290 | orchestrator | 14:07:51.629 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-08-29 14:07:51.632958 | orchestrator | 14:07:51.632 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-08-29 14:07:51.638270 | orchestrator | 14:07:51.637 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-08-29 14:07:51.651146 | orchestrator | 14:07:51.650 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-08-29 14:08:01.613698 | orchestrator | 14:08:01.613 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-08-29 14:08:01.622049 | orchestrator | 14:08:01.621 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-08-29 14:08:01.630618 | orchestrator | 14:08:01.630 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-08-29 14:08:01.633596 | orchestrator | 14:08:01.633 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-08-29 14:08:01.639069 | orchestrator | 14:08:01.638 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-08-29 14:08:01.651375 | orchestrator | 14:08:01.651 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-08-29 14:08:02.198994 | orchestrator | 14:08:02.198 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=0e956b80-5e0c-460c-8f41-36f288388838]
2025-08-29 14:08:04.415805 | orchestrator | 14:08:04.415 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 22s [id=341efbce-ffed-4024-993f-4a7606abcf9f]
2025-08-29 14:08:04.416116 | orchestrator | 14:08:04.415 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 22s [id=1a7f4bfd-effc-4e66-a2dd-87b839398275]
2025-08-29 14:08:04.416130 | orchestrator | 14:08:04.416 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 22s [id=1e52bfc8-dbe5-40c5-ade2-9d9a32b3f8ef]
2025-08-29 14:08:11.635878 | orchestrator | 14:08:11.635 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-08-29 14:08:11.640087 | orchestrator | 14:08:11.639 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-08-29 14:08:12.510403 | orchestrator | 14:08:12.510 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=703f948a-e0fe-4675-ba92-2b4bff65d911]
2025-08-29 14:08:12.686454 | orchestrator | 14:08:12.685 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=636a8fc3-0f99-4127-9737-c9e8919df96b]
2025-08-29 14:08:12.706828 | orchestrator | 14:08:12.706 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-08-29 14:08:12.712641 | orchestrator | 14:08:12.712 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2162935654383696424]
2025-08-29 14:08:12.718612 | orchestrator | 14:08:12.718 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-08-29 14:08:12.730580 | orchestrator | 14:08:12.730 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-08-29 14:08:12.731773 | orchestrator | 14:08:12.731 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-08-29 14:08:12.738331 | orchestrator | 14:08:12.738 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-08-29 14:08:12.738408 | orchestrator | 14:08:12.738 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-08-29 14:08:12.745014 | orchestrator | 14:08:12.744 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-08-29 14:08:12.747912 | orchestrator | 14:08:12.747 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-08-29 14:08:12.759369 | orchestrator | 14:08:12.759 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-08-29 14:08:12.767500 | orchestrator | 14:08:12.767 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-08-29 14:08:12.772428 | orchestrator | 14:08:12.772 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-08-29 14:08:16.397763 | orchestrator | 14:08:16.397 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=341efbce-ffed-4024-993f-4a7606abcf9f/34b7b0aa-9c3f-4af7-b9a4-6261675e7012]
2025-08-29 14:08:16.412723 | orchestrator | 14:08:16.412 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=1e52bfc8-dbe5-40c5-ade2-9d9a32b3f8ef/8d4c2d77-38a8-4e70-8dcf-48e237e577e8]
2025-08-29 14:08:16.490603 | orchestrator | 14:08:16.489 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=1e52bfc8-dbe5-40c5-ade2-9d9a32b3f8ef/54964cbc-4c5d-4365-aa24-d13bcc6e495a]
2025-08-29 14:08:16.493056 | orchestrator | 14:08:16.492 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=1a7f4bfd-effc-4e66-a2dd-87b839398275/d4d11aa1-e648-4125-bb7f-b16cf1114c9f]
2025-08-29 14:08:16.524475 | orchestrator | 14:08:16.524 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=1a7f4bfd-effc-4e66-a2dd-87b839398275/ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9]
2025-08-29 14:08:16.583290 | orchestrator | 14:08:16.582 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=341efbce-ffed-4024-993f-4a7606abcf9f/b50f501b-7dcc-49bb-af34-bcea70be6a61]
2025-08-29 14:08:22.608427 | orchestrator | 14:08:22.607 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=1a7f4bfd-effc-4e66-a2dd-87b839398275/fa9350c4-64bc-4afb-b502-f801a6f70a24]
2025-08-29 14:08:22.719709 | orchestrator | 14:08:22.719 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=1e52bfc8-dbe5-40c5-ade2-9d9a32b3f8ef/00b08f76-6c14-40db-8d96-1843b494176b]
2025-08-29 14:08:22.732189 | orchestrator | 14:08:22.731 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Still creating... [10s elapsed]
2025-08-29 14:08:22.754814 | orchestrator | 14:08:22.754 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=341efbce-ffed-4024-993f-4a7606abcf9f/8e840163-cd15-4bab-ac0d-7731db5a26c7]
2025-08-29 14:08:22.769927 | orchestrator | 14:08:22.769 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-08-29 14:08:32.771121 | orchestrator | 14:08:32.770 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-08-29 14:08:33.150440 | orchestrator | 14:08:33.149 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=ebfd2d66-2511-42dd-9324-c8f158ebcafb]
2025-08-29 14:08:33.178231 | orchestrator | 14:08:33.177 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-08-29 14:08:33.178317 | orchestrator | 14:08:33.178 STDOUT terraform: Outputs: 2025-08-29 14:08:33.178332 | orchestrator | 14:08:33.178 STDOUT terraform: manager_address = 2025-08-29 14:08:33.178344 | orchestrator | 14:08:33.178 STDOUT terraform: private_key = 2025-08-29 14:08:33.657247 | orchestrator | ok: Runtime: 0:01:13.661643 2025-08-29 14:08:33.698297 | 2025-08-29 14:08:33.698550 | TASK [Create infrastructure (stable)] 2025-08-29 14:08:34.236701 | orchestrator | skipping: Conditional result was False 2025-08-29 14:08:34.246036 | 2025-08-29 14:08:34.246161 | TASK [Fetch manager address] 2025-08-29 14:08:34.681079 | orchestrator | ok 2025-08-29 14:08:34.692660 | 2025-08-29 14:08:34.692815 | TASK [Set manager_host address] 2025-08-29 14:08:34.768345 | orchestrator | ok 2025-08-29 14:08:34.777676 | 2025-08-29 14:08:34.777795 | LOOP [Update ansible collections] 2025-08-29 14:08:37.433204 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 14:08:37.433786 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:08:37.433877 | orchestrator | Starting galaxy collection install process 2025-08-29 14:08:37.433930 | orchestrator | Process install dependency map 2025-08-29 14:08:37.433973 | orchestrator | Starting collection install process 2025-08-29 14:08:37.434009 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-08-29 14:08:37.434068 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-08-29 14:08:37.434135 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-08-29 14:08:37.434230 | orchestrator | ok: Item: commons Runtime: 0:00:02.314860 2025-08-29 14:08:38.292520 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 
2025-08-29 14:08:38.292726 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-08-29 14:08:38.293406 | orchestrator | Starting galaxy collection install process 2025-08-29 14:08:38.293482 | orchestrator | Process install dependency map 2025-08-29 14:08:38.293523 | orchestrator | Starting collection install process 2025-08-29 14:08:38.293558 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-08-29 14:08:38.293594 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-08-29 14:08:38.293627 | orchestrator | osism.services:999.0.0 was installed successfully 2025-08-29 14:08:38.293682 | orchestrator | ok: Item: services Runtime: 0:00:00.597233 2025-08-29 14:08:38.308296 | 2025-08-29 14:08:38.308423 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 14:08:48.904241 | orchestrator | ok 2025-08-29 14:08:48.914478 | 2025-08-29 14:08:48.914601 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 14:09:48.949044 | orchestrator | ok 2025-08-29 14:09:48.956745 | 2025-08-29 14:09:48.956857 | TASK [Fetch manager ssh hostkey] 2025-08-29 14:09:50.529723 | orchestrator | Output suppressed because no_log was given 2025-08-29 14:09:50.540694 | 2025-08-29 14:09:50.540835 | TASK [Get ssh keypair from terraform environment] 2025-08-29 14:09:51.076587 | orchestrator | ok: Runtime: 0:00:00.006832 2025-08-29 14:09:51.091240 | 2025-08-29 14:09:51.091372 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 14:09:51.129398 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-08-29 14:09:51.138692 | 2025-08-29 14:09:51.138790 | TASK [Run manager part 0] 2025-08-29 14:09:52.561748 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:09:52.622338 | orchestrator | 2025-08-29 14:09:52.622511 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-08-29 14:09:52.622526 | orchestrator | 2025-08-29 14:09:52.622540 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-08-29 14:09:54.549788 | orchestrator | ok: [testbed-manager] 2025-08-29 14:09:54.549851 | orchestrator | 2025-08-29 14:09:54.549874 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 14:09:54.549884 | orchestrator | 2025-08-29 14:09:54.549894 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:09:56.571800 | orchestrator | ok: [testbed-manager] 2025-08-29 14:09:56.571908 | orchestrator | 2025-08-29 14:09:56.571918 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 14:09:57.264984 | orchestrator | ok: [testbed-manager] 2025-08-29 14:09:57.265038 | orchestrator | 2025-08-29 14:09:57.265049 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 14:09:57.320386 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:57.320426 | orchestrator | 2025-08-29 14:09:57.320452 | orchestrator | TASK [Update package cache] **************************************************** 2025-08-29 14:09:57.343784 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:57.343826 | orchestrator | 2025-08-29 14:09:57.343833 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 14:09:57.374176 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:57.374214 | 
orchestrator | 2025-08-29 14:09:57.374223 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-08-29 14:09:57.397379 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:57.397418 | orchestrator | 2025-08-29 14:09:57.397425 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 14:09:57.429007 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:57.429069 | orchestrator | 2025-08-29 14:09:57.429082 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-08-29 14:09:57.473743 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:57.473791 | orchestrator | 2025-08-29 14:09:57.473800 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-08-29 14:09:57.505007 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:09:57.505062 | orchestrator | 2025-08-29 14:09:57.505073 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-08-29 14:09:58.226484 | orchestrator | changed: [testbed-manager] 2025-08-29 14:09:58.226631 | orchestrator | 2025-08-29 14:09:58.226644 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-08-29 14:12:35.465685 | orchestrator | changed: [testbed-manager] 2025-08-29 14:12:35.465965 | orchestrator | 2025-08-29 14:12:35.465993 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-08-29 14:13:55.680513 | orchestrator | changed: [testbed-manager] 2025-08-29 14:13:55.680598 | orchestrator | 2025-08-29 14:13:55.680614 | orchestrator | TASK [Install required packages] *********************************************** 2025-08-29 14:14:16.270430 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:16.270525 | orchestrator | 2025-08-29 14:14:16.270544 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-08-29 14:14:25.529230 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:25.529313 | orchestrator | 2025-08-29 14:14:25.529329 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 14:14:25.585073 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:25.585153 | orchestrator | 2025-08-29 14:14:25.585325 | orchestrator | TASK [Get current user] ******************************************************** 2025-08-29 14:14:26.406824 | orchestrator | ok: [testbed-manager] 2025-08-29 14:14:26.406916 | orchestrator | 2025-08-29 14:14:26.406935 | orchestrator | TASK [Create venv directory] *************************************************** 2025-08-29 14:14:27.159010 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:27.159061 | orchestrator | 2025-08-29 14:14:27.159070 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-08-29 14:14:33.818540 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:33.818622 | orchestrator | 2025-08-29 14:14:33.818660 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-08-29 14:14:40.162990 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:40.163039 | orchestrator | 2025-08-29 14:14:40.163052 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-08-29 14:14:44.008493 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:44.008559 | orchestrator | 2025-08-29 14:14:44.008574 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-08-29 14:14:46.040970 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:46.041038 | orchestrator | 2025-08-29 14:14:46.041054 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-08-29 
14:14:47.121674 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 14:14:47.121707 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 14:14:47.121713 | orchestrator | 2025-08-29 14:14:47.121720 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-08-29 14:14:47.198431 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 14:14:47.198468 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 14:14:47.198473 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 14:14:47.198477 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-08-29 14:14:57.686848 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-08-29 14:14:57.686920 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-08-29 14:14:57.686932 | orchestrator | 2025-08-29 14:14:57.686942 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-08-29 14:14:58.272149 | orchestrator | changed: [testbed-manager] 2025-08-29 14:14:58.272258 | orchestrator | 2025-08-29 14:14:58.272275 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-08-29 14:17:21.605009 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-08-29 14:17:21.605232 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-08-29 14:17:21.605245 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-08-29 14:17:21.605250 | orchestrator | 2025-08-29 14:17:21.605256 | orchestrator | TASK [Install local collections] *********************************************** 2025-08-29 14:17:23.947125 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-08-29 14:17:23.947216 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-08-29 14:17:23.947222 | orchestrator | 2025-08-29 14:17:23.947227 | orchestrator | PLAY [Create operator user] **************************************************** 2025-08-29 14:17:23.947232 | orchestrator | 2025-08-29 14:17:23.947237 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:17:25.338620 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:25.338653 | orchestrator | 2025-08-29 14:17:25.338662 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 14:17:25.380681 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:25.380715 | orchestrator | 2025-08-29 14:17:25.380721 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 14:17:25.445899 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:25.445941 | orchestrator | 2025-08-29 14:17:25.445950 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 14:17:26.221961 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:26.222209 | orchestrator | 2025-08-29 14:17:26.222222 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 14:17:26.976685 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:26.976727 | orchestrator | 2025-08-29 14:17:26.976736 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 14:17:28.321607 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-08-29 14:17:28.321673 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-08-29 14:17:28.321687 | orchestrator | 2025-08-29 14:17:28.321710 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-08-29 14:17:29.733682 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:29.733768 | orchestrator | 2025-08-29 14:17:29.733782 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 14:17:31.513236 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-08-29 14:17:31.513269 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-08-29 14:17:31.513276 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-08-29 14:17:31.513281 | orchestrator | 2025-08-29 14:17:31.513288 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-08-29 14:17:31.567837 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:31.567880 | orchestrator | 2025-08-29 14:17:31.567892 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-08-29 14:17:32.155120 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:32.155220 | orchestrator | 2025-08-29 14:17:32.155238 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-08-29 14:17:32.221880 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:32.221917 | orchestrator | 2025-08-29 14:17:32.221924 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-08-29 14:17:33.043315 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:17:33.043385 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:33.043401 | orchestrator | 2025-08-29 14:17:33.043414 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-08-29 14:17:33.075624 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:33.075681 | orchestrator | 2025-08-29 14:17:33.075694 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-08-29 14:17:33.105904 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:33.105979 | orchestrator | 2025-08-29 14:17:33.105997 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-08-29 14:17:33.141597 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:33.141657 | orchestrator | 2025-08-29 14:17:33.141671 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-08-29 14:17:33.193911 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:33.193970 | orchestrator | 2025-08-29 14:17:33.193988 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-08-29 14:17:33.909695 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:33.909768 | orchestrator | 2025-08-29 14:17:33.909787 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-08-29 14:17:33.909800 | orchestrator | 2025-08-29 14:17:33.909811 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:17:35.309228 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:35.309314 | orchestrator | 2025-08-29 14:17:35.309331 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-08-29 14:17:36.283550 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:36.283635 | orchestrator | 2025-08-29 14:17:36.283651 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:17:36.283664 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-08-29 14:17:36.283676 | orchestrator | 2025-08-29 14:17:36.476189 | orchestrator | ok: Runtime: 0:07:44.912504 2025-08-29 14:17:36.493754 | 2025-08-29 14:17:36.494919 | TASK [Point 
out that the log in on the manager is now possible] 2025-08-29 14:17:36.544234 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-08-29 14:17:36.554483 | 2025-08-29 14:17:36.554606 | TASK [Point out that the following task takes some time and does not give any output] 2025-08-29 14:17:36.589286 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-08-29 14:17:36.598114 | 2025-08-29 14:17:36.598226 | TASK [Run manager part 1 + 2] 2025-08-29 14:17:38.049681 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-08-29 14:17:38.142681 | orchestrator | 2025-08-29 14:17:38.142791 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-08-29 14:17:38.142811 | orchestrator | 2025-08-29 14:17:38.142848 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:17:40.915879 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:40.916008 | orchestrator | 2025-08-29 14:17:40.916060 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-08-29 14:17:40.956849 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:40.956915 | orchestrator | 2025-08-29 14:17:40.956926 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-08-29 14:17:40.994558 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:40.994617 | orchestrator | 2025-08-29 14:17:40.994627 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:17:41.031187 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:41.031234 | orchestrator | 2025-08-29 14:17:41.031241 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-08-29 14:17:41.097440 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:41.097491 | orchestrator | 2025-08-29 14:17:41.097500 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:17:41.154974 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:41.155019 | orchestrator | 2025-08-29 14:17:41.155025 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:17:41.195984 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-08-29 14:17:41.196027 | orchestrator | 2025-08-29 14:17:41.196032 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 14:17:41.892195 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:41.892278 | orchestrator | 2025-08-29 14:17:41.892296 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:17:41.933951 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:17:41.934015 | orchestrator | 2025-08-29 14:17:41.934056 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:17:43.280986 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:43.281055 | orchestrator | 2025-08-29 14:17:43.281074 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:17:43.863516 | orchestrator | ok: [testbed-manager] 2025-08-29 14:17:43.863592 | orchestrator | 2025-08-29 14:17:43.863608 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 14:17:45.037476 | orchestrator | changed: [testbed-manager] 2025-08-29 14:17:45.037547 | orchestrator | 2025-08-29 14:17:45.037564 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-08-29 14:18:02.584849 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:02.584933 | orchestrator | 2025-08-29 14:18:02.584948 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-08-29 14:18:03.296412 | orchestrator | ok: [testbed-manager] 2025-08-29 14:18:03.297218 | orchestrator | 2025-08-29 14:18:03.297248 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-08-29 14:18:03.354262 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:18:03.354334 | orchestrator | 2025-08-29 14:18:03.354349 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-08-29 14:18:04.331631 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:04.331720 | orchestrator | 2025-08-29 14:18:04.331737 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-08-29 14:18:05.316323 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:05.316415 | orchestrator | 2025-08-29 14:18:05.316431 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-08-29 14:18:05.891513 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:05.891601 | orchestrator | 2025-08-29 14:18:05.891616 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-08-29 14:18:05.935740 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-08-29 14:18:05.936009 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-08-29 14:18:05.936033 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-08-29 14:18:05.936046 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-08-29 14:18:10.416371 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:10.416441 | orchestrator | 2025-08-29 14:18:10.416451 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-08-29 14:18:20.301052 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-08-29 14:18:20.301158 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-08-29 14:18:20.301176 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-08-29 14:18:20.301189 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-08-29 14:18:20.301209 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-08-29 14:18:20.301221 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-08-29 14:18:20.301232 | orchestrator | 2025-08-29 14:18:20.301245 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-08-29 14:18:21.398654 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:21.398696 | orchestrator | 2025-08-29 14:18:21.398704 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-08-29 14:18:21.442089 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:18:21.442192 | orchestrator | 2025-08-29 14:18:21.442202 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-08-29 14:18:24.669734 | orchestrator | changed: [testbed-manager] 2025-08-29 14:18:24.669814 | orchestrator | 2025-08-29 14:18:24.669829 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-08-29 14:18:24.703339 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:18:24.703392 | orchestrator | 2025-08-29 14:18:24.703399 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-08-29 14:20:07.209863 | orchestrator | changed: [testbed-manager] 2025-08-29 
14:20:07.210012 | orchestrator | 2025-08-29 14:20:07.210105 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-08-29 14:20:08.429476 | orchestrator | ok: [testbed-manager] 2025-08-29 14:20:08.429509 | orchestrator | 2025-08-29 14:20:08.429518 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:20:08.429524 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-08-29 14:20:08.429530 | orchestrator | 2025-08-29 14:20:08.730524 | orchestrator | ok: Runtime: 0:02:31.584825 2025-08-29 14:20:08.750467 | 2025-08-29 14:20:08.750660 | TASK [Reboot manager] 2025-08-29 14:20:10.300536 | orchestrator | ok: Runtime: 0:00:00.974498 2025-08-29 14:20:10.318474 | 2025-08-29 14:20:10.318677 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-08-29 14:20:26.716802 | orchestrator | ok 2025-08-29 14:20:26.727851 | 2025-08-29 14:20:26.727975 | TASK [Wait a little longer for the manager so that everything is ready] 2025-08-29 14:21:26.772852 | orchestrator | ok 2025-08-29 14:21:26.783731 | 2025-08-29 14:21:26.783851 | TASK [Deploy manager + bootstrap nodes] 2025-08-29 14:21:29.365875 | orchestrator | 2025-08-29 14:21:29.366100 | orchestrator | # DEPLOY MANAGER 2025-08-29 14:21:29.366126 | orchestrator | 2025-08-29 14:21:29.366140 | orchestrator | + set -e 2025-08-29 14:21:29.366152 | orchestrator | + echo 2025-08-29 14:21:29.366165 | orchestrator | + echo '# DEPLOY MANAGER' 2025-08-29 14:21:29.366181 | orchestrator | + echo 2025-08-29 14:21:29.366224 | orchestrator | + cat /opt/manager-vars.sh 2025-08-29 14:21:29.369629 | orchestrator | export NUMBER_OF_NODES=6 2025-08-29 14:21:29.369649 | orchestrator | 2025-08-29 14:21:29.369660 | orchestrator | export CEPH_VERSION=reef 2025-08-29 14:21:29.369671 | orchestrator | export CONFIGURATION_VERSION=main 2025-08-29 14:21:29.369682 | orchestrator 
| export MANAGER_VERSION=latest 2025-08-29 14:21:29.369700 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-08-29 14:21:29.369710 | orchestrator | 2025-08-29 14:21:29.369726 | orchestrator | export ARA=false 2025-08-29 14:21:29.369736 | orchestrator | export DEPLOY_MODE=manager 2025-08-29 14:21:29.369751 | orchestrator | export TEMPEST=false 2025-08-29 14:21:29.369761 | orchestrator | export IS_ZUUL=true 2025-08-29 14:21:29.369776 | orchestrator | 2025-08-29 14:21:29.369800 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226 2025-08-29 14:21:29.369817 | orchestrator | export EXTERNAL_API=false 2025-08-29 14:21:29.369832 | orchestrator | 2025-08-29 14:21:29.369847 | orchestrator | export IMAGE_USER=ubuntu 2025-08-29 14:21:29.369865 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-08-29 14:21:29.369880 | orchestrator | 2025-08-29 14:21:29.369896 | orchestrator | export CEPH_STACK=ceph-ansible 2025-08-29 14:21:29.369921 | orchestrator | 2025-08-29 14:21:29.369935 | orchestrator | + echo 2025-08-29 14:21:29.369945 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 14:21:29.371043 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 14:21:29.371062 | orchestrator | ++ INTERACTIVE=false 2025-08-29 14:21:29.371073 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 14:21:29.371084 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 14:21:29.371384 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 14:21:29.371398 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 14:21:29.371408 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 14:21:29.371510 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 14:21:29.371523 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 14:21:29.371533 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 14:21:29.371543 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 14:21:29.371553 | orchestrator | ++ export MANAGER_VERSION=latest 2025-08-29 14:21:29.371563 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 14:21:29.371573 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 14:21:29.371590 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 14:21:29.371604 | orchestrator | ++ export ARA=false
2025-08-29 14:21:29.371614 | orchestrator | ++ ARA=false
2025-08-29 14:21:29.371623 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 14:21:29.371633 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 14:21:29.371643 | orchestrator | ++ export TEMPEST=false
2025-08-29 14:21:29.371652 | orchestrator | ++ TEMPEST=false
2025-08-29 14:21:29.371662 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 14:21:29.371672 | orchestrator | ++ IS_ZUUL=true
2025-08-29 14:21:29.371682 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226
2025-08-29 14:21:29.371692 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226
2025-08-29 14:21:29.371705 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 14:21:29.371715 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 14:21:29.371725 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 14:21:29.371734 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 14:21:29.371744 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 14:21:29.371754 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 14:21:29.371764 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 14:21:29.371774 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 14:21:29.371786 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-08-29 14:21:29.429740 | orchestrator | + docker version
2025-08-29 14:21:29.687259 | orchestrator | Client: Docker Engine - Community
2025-08-29 14:21:29.687344 | orchestrator | Version: 27.5.1
2025-08-29 14:21:29.687356 | orchestrator | API version: 1.47
2025-08-29 14:21:29.687366 | orchestrator | Go version: go1.22.11
2025-08-29 14:21:29.687376 | orchestrator | Git commit: 9f9e405
2025-08-29 14:21:29.687386 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 14:21:29.687397 | orchestrator | OS/Arch: linux/amd64
2025-08-29 14:21:29.687407 | orchestrator | Context: default
2025-08-29 14:21:29.687416 | orchestrator |
2025-08-29 14:21:29.687426 | orchestrator | Server: Docker Engine - Community
2025-08-29 14:21:29.687436 | orchestrator | Engine:
2025-08-29 14:21:29.687446 | orchestrator | Version: 27.5.1
2025-08-29 14:21:29.687456 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-08-29 14:21:29.687491 | orchestrator | Go version: go1.22.11
2025-08-29 14:21:29.687501 | orchestrator | Git commit: 4c9b3b0
2025-08-29 14:21:29.687511 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-08-29 14:21:29.687520 | orchestrator | OS/Arch: linux/amd64
2025-08-29 14:21:29.687530 | orchestrator | Experimental: false
2025-08-29 14:21:29.687539 | orchestrator | containerd:
2025-08-29 14:21:29.687549 | orchestrator | Version: 1.7.27
2025-08-29 14:21:29.687559 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-08-29 14:21:29.687569 | orchestrator | runc:
2025-08-29 14:21:29.687589 | orchestrator | Version: 1.2.5
2025-08-29 14:21:29.687599 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-08-29 14:21:29.687609 | orchestrator | docker-init:
2025-08-29 14:21:29.687881 | orchestrator | Version: 0.19.0
2025-08-29 14:21:29.687898 | orchestrator | GitCommit: de40ad0
2025-08-29 14:21:29.691373 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-08-29 14:21:29.701739 | orchestrator | + set -e
2025-08-29 14:21:29.701756 | orchestrator | + source /opt/manager-vars.sh
2025-08-29 14:21:29.701767 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-08-29 14:21:29.701778 | orchestrator | ++ NUMBER_OF_NODES=6
2025-08-29 14:21:29.701788 | orchestrator | ++ export CEPH_VERSION=reef
2025-08-29 14:21:29.701798 | orchestrator | ++ CEPH_VERSION=reef
2025-08-29 14:21:29.701807 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-08-29 14:21:29.701817 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-08-29 14:21:29.701826 | orchestrator | ++ export MANAGER_VERSION=latest
2025-08-29 14:21:29.701836 | orchestrator | ++ MANAGER_VERSION=latest
2025-08-29 14:21:29.701845 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-08-29 14:21:29.701855 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-08-29 14:21:29.701865 | orchestrator | ++ export ARA=false
2025-08-29 14:21:29.701874 | orchestrator | ++ ARA=false
2025-08-29 14:21:29.701884 | orchestrator | ++ export DEPLOY_MODE=manager
2025-08-29 14:21:29.701893 | orchestrator | ++ DEPLOY_MODE=manager
2025-08-29 14:21:29.701903 | orchestrator | ++ export TEMPEST=false
2025-08-29 14:21:29.701912 | orchestrator | ++ TEMPEST=false
2025-08-29 14:21:29.701922 | orchestrator | ++ export IS_ZUUL=true
2025-08-29 14:21:29.701931 | orchestrator | ++ IS_ZUUL=true
2025-08-29 14:21:29.701941 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226
2025-08-29 14:21:29.701951 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226
2025-08-29 14:21:29.701960 | orchestrator | ++ export EXTERNAL_API=false
2025-08-29 14:21:29.701970 | orchestrator | ++ EXTERNAL_API=false
2025-08-29 14:21:29.701979 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-08-29 14:21:29.701989 | orchestrator | ++ IMAGE_USER=ubuntu
2025-08-29 14:21:29.702003 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-08-29 14:21:29.702012 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-08-29 14:21:29.702079 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-08-29 14:21:29.702089 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-08-29 14:21:29.702099 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-08-29 14:21:29.702109 | orchestrator | ++ export INTERACTIVE=false
2025-08-29 14:21:29.702118 | orchestrator | ++ INTERACTIVE=false
2025-08-29 14:21:29.702128 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-08-29 14:21:29.702141 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-08-29 14:21:29.702154 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-08-29 14:21:29.702164 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 14:21:29.702174 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-08-29 14:21:29.709925 | orchestrator | + set -e
2025-08-29 14:21:29.709943 | orchestrator | + VERSION=reef
2025-08-29 14:21:29.711229 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:21:29.718430 | orchestrator | + [[ -n ceph_version: reef ]]
2025-08-29 14:21:29.718447 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:21:29.724564 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-08-29 14:21:29.731775 | orchestrator | + set -e
2025-08-29 14:21:29.731790 | orchestrator | + VERSION=2024.2
2025-08-29 14:21:29.732799 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:21:29.737022 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-08-29 14:21:29.737056 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-08-29 14:21:29.742064 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-08-29 14:21:29.742929 | orchestrator | ++ semver latest 7.0.0
2025-08-29 14:21:29.808616 | orchestrator | + [[ -1 -ge 0 ]]
2025-08-29 14:21:29.808667 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-08-29 14:21:29.808681 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-08-29 14:21:29.808700 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-08-29 14:21:29.909416 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 14:21:29.914795 | orchestrator | + source /opt/venv/bin/activate 2025-08-29
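The `set-ceph-version.sh` and `set-openstack-version.sh` traces above follow the same pattern: only rewrite a `key: value` line when the key is already present in the configuration file. A minimal sketch of that pattern, using a temporary file instead of the real `/opt/configuration/environments/manager/configuration.yml`:

```shell
#!/usr/bin/env bash
# Sketch of the set-*-version.sh pattern from the trace above.
# The real scripts also run under `set -e`; the file here is a stand-in.
CONF=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$CONF"

set_version() {  # set_version <key> <value> <file>
  local key=$1 value=$2 file=$3
  # Guard with grep so a missing key is not silently appended or mangled
  if [[ -n "$(grep "^${key}:" "$file")" ]]; then
    sed -i "s/${key}: .*/${key}: ${value}/g" "$file"
  fi
}

set_version ceph_version reef "$CONF"
grep '^ceph_version:' "$CONF"   # ceph_version: reef
```

The grep guard is what makes the scripts idempotent and safe against typos: an unknown key leaves the file untouched instead of corrupting it.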
14:21:29.915762 | orchestrator | ++ deactivate nondestructive
2025-08-29 14:21:29.915788 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:29.915799 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:29.915815 | orchestrator | ++ hash -r
2025-08-29 14:21:29.915827 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:29.915838 | orchestrator | ++ unset VIRTUAL_ENV
2025-08-29 14:21:29.915849 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-08-29 14:21:29.915988 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-08-29 14:21:29.916260 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-08-29 14:21:29.916277 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-08-29 14:21:29.916289 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-08-29 14:21:29.916300 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-08-29 14:21:29.916312 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:21:29.916323 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:21:29.916334 | orchestrator | ++ export PATH
2025-08-29 14:21:29.916349 | orchestrator | ++ '[' -n '' ']'
2025-08-29 14:21:29.916360 | orchestrator | ++ '[' -z '' ']'
2025-08-29 14:21:29.916468 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-08-29 14:21:29.916483 | orchestrator | ++ PS1='(venv) '
2025-08-29 14:21:29.916494 | orchestrator | ++ export PS1
2025-08-29 14:21:29.916505 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-08-29 14:21:29.916516 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-08-29 14:21:29.916527 | orchestrator | ++ hash -r
2025-08-29 14:21:29.916552 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-08-29 14:21:31.321724 | orchestrator |
2025-08-29 14:21:31.321829 | orchestrator |
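The `source /opt/venv/bin/activate` trace above is the standard virtualenv mechanism: save the current `PATH`, prepend the venv's `bin` directory, and restore the saved value on deactivate. A reduced sketch of just that PATH handling (using a placeholder directory, not the job's `/opt/venv`):

```shell
#!/usr/bin/env bash
# Sketch of the activate/deactivate PATH handling traced above.
# /tmp/demo-venv is a placeholder for illustration only.

# "activate": remember the old PATH, put the venv's bin first
_OLD_VIRTUAL_PATH=$PATH
VIRTUAL_ENV=/tmp/demo-venv
PATH=$VIRTUAL_ENV/bin:$PATH
export VIRTUAL_ENV PATH

# While active, /tmp/demo-venv/bin shadows system binaries of the same name.

# "deactivate": restore the original PATH and drop the venv markers
PATH=$_OLD_VIRTUAL_PATH
export PATH
unset _OLD_VIRTUAL_PATH VIRTUAL_ENV
```

Because the venv's `bin` is prepended, `ansible-playbook` resolves to the venv's copy while active; the later `deactivate` in this job (after the play recap) undoes exactly this.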
PLAY [Copy custom facts] *******************************************************
2025-08-29 14:21:31.321846 | orchestrator |
2025-08-29 14:21:31.321857 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 14:21:31.895643 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:31.895756 | orchestrator |
2025-08-29 14:21:31.895774 | orchestrator | TASK [Copy fact files] *********************************************************
2025-08-29 14:21:32.932732 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:32.932851 | orchestrator |
2025-08-29 14:21:32.932867 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-08-29 14:21:32.932879 | orchestrator |
2025-08-29 14:21:32.932890 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:21:35.428999 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:35.429269 | orchestrator |
2025-08-29 14:21:35.429288 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-08-29 14:21:35.501076 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:35.501147 | orchestrator |
2025-08-29 14:21:35.501162 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-08-29 14:21:35.979598 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:35.979708 | orchestrator |
2025-08-29 14:21:35.979725 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-08-29 14:21:36.017295 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:36.017389 | orchestrator |
2025-08-29 14:21:36.017411 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-08-29 14:21:36.383865 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:36.383968 | orchestrator |
2025-08-29 14:21:36.383982 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-08-29 14:21:36.444187 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:36.444273 | orchestrator |
2025-08-29 14:21:36.444289 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-08-29 14:21:36.803162 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:36.803267 | orchestrator |
2025-08-29 14:21:36.803283 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-08-29 14:21:36.933535 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:36.933635 | orchestrator |
2025-08-29 14:21:36.933650 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-08-29 14:21:36.933663 | orchestrator |
2025-08-29 14:21:36.933677 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:21:38.656796 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:38.656899 | orchestrator |
2025-08-29 14:21:38.656916 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-08-29 14:21:38.762823 | orchestrator | included: osism.services.traefik for testbed-manager
2025-08-29 14:21:38.762901 | orchestrator |
2025-08-29 14:21:38.762914 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-08-29 14:21:38.819957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-08-29 14:21:38.819995 | orchestrator |
2025-08-29 14:21:38.820007 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-08-29 14:21:39.940790 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-08-29 14:21:39.940899 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-08-29 14:21:39.940915 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-08-29 14:21:39.940928 | orchestrator |
2025-08-29 14:21:39.940942 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-08-29 14:21:41.897444 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-08-29 14:21:41.897551 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-08-29 14:21:41.897569 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-08-29 14:21:41.897582 | orchestrator |
2025-08-29 14:21:41.897594 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-08-29 14:21:42.601195 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:21:42.601300 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:42.601318 | orchestrator |
2025-08-29 14:21:42.601331 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-08-29 14:21:43.274582 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:21:43.274675 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:43.274690 | orchestrator |
2025-08-29 14:21:43.274700 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-08-29 14:21:43.342565 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:43.342622 | orchestrator |
2025-08-29 14:21:43.342635 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-08-29 14:21:43.734154 | orchestrator | ok: [testbed-manager]
2025-08-29 14:21:43.734255 | orchestrator |
2025-08-29 14:21:43.734269 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-08-29 14:21:43.823903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-08-29 14:21:43.823998 | orchestrator |
2025-08-29 14:21:43.824012 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-08-29 14:21:44.891994 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:44.892159 | orchestrator |
2025-08-29 14:21:44.892179 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-08-29 14:21:45.732325 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:45.732433 | orchestrator |
2025-08-29 14:21:45.732451 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-08-29 14:21:58.584760 | orchestrator | changed: [testbed-manager]
2025-08-29 14:21:58.584858 | orchestrator |
2025-08-29 14:21:58.584875 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-08-29 14:21:58.639557 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:21:58.639624 | orchestrator |
2025-08-29 14:21:58.639639 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-08-29 14:21:58.639651 | orchestrator |
2025-08-29 14:21:58.639662 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:22:00.454204 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:00.454291 | orchestrator |
2025-08-29 14:22:00.454334 | orchestrator | TASK [Apply manager role] ******************************************************
2025-08-29 14:22:00.564664 | orchestrator | included: osism.services.manager for testbed-manager
2025-08-29 14:22:00.564744 | orchestrator |
2025-08-29 14:22:00.564773 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-08-29 14:22:00.626120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:22:00.626192 | orchestrator |
2025-08-29 14:22:00.626206 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-08-29 14:22:03.335945 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:03.336081 | orchestrator |
2025-08-29 14:22:03.336101 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-08-29 14:22:03.392594 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:03.392644 | orchestrator |
2025-08-29 14:22:03.392658 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-08-29 14:22:03.540799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-08-29 14:22:03.540836 | orchestrator |
2025-08-29 14:22:03.540848 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-08-29 14:22:06.574889 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-08-29 14:22:06.574979 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-08-29 14:22:06.574993 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-08-29 14:22:06.575005 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-08-29 14:22:06.575072 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-08-29 14:22:06.575084 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-08-29 14:22:06.575096 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-08-29 14:22:06.575107 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-08-29 14:22:06.575119 | orchestrator |
2025-08-29 14:22:06.575131 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-08-29 14:22:07.237767 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:07.237867 | orchestrator |
2025-08-29 14:22:07.237883 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-08-29 14:22:07.880615 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:07.880717 | orchestrator |
2025-08-29 14:22:07.880732 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-08-29 14:22:07.956939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-08-29 14:22:07.957052 | orchestrator |
2025-08-29 14:22:07.957067 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-08-29 14:22:09.271333 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-08-29 14:22:09.271441 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-08-29 14:22:09.271457 | orchestrator |
2025-08-29 14:22:09.271471 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-08-29 14:22:09.933716 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:09.933818 | orchestrator |
2025-08-29 14:22:09.933833 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-08-29 14:22:09.991151 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:09.991234 | orchestrator |
2025-08-29 14:22:09.991248 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-08-29 14:22:10.046121 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:10.046211 | orchestrator |
2025-08-29 14:22:10.046226 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-08-29 14:22:10.115490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-08-29 14:22:10.115603 | orchestrator |
2025-08-29 14:22:10.115626 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-08-29 14:22:11.541771 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:22:11.541877 | orchestrator | changed: [testbed-manager] => (item=None)
2025-08-29 14:22:11.541921 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:11.541935 | orchestrator |
2025-08-29 14:22:11.541947 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-08-29 14:22:12.200544 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:12.200655 | orchestrator |
2025-08-29 14:22:12.200673 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-08-29 14:22:12.259471 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:12.259554 | orchestrator |
2025-08-29 14:22:12.259568 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-08-29 14:22:12.358449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-08-29 14:22:12.358530 | orchestrator |
2025-08-29 14:22:12.358543 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-08-29 14:22:12.935542 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:12.935634 | orchestrator |
2025-08-29 14:22:12.935647 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-08-29 14:22:13.361485 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:13.361576 | orchestrator |
2025-08-29 14:22:13.361592 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-08-29 14:22:14.657393 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-08-29 14:22:14.657494 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-08-29 14:22:14.657518 | orchestrator |
2025-08-29 14:22:14.657541 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-08-29 14:22:15.305994 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:15.306196 | orchestrator |
2025-08-29 14:22:15.306213 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-08-29 14:22:15.707859 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:15.707949 | orchestrator |
2025-08-29 14:22:15.707963 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-08-29 14:22:16.090668 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:16.090748 | orchestrator |
2025-08-29 14:22:16.090759 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-08-29 14:22:16.148057 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:16.148147 | orchestrator |
2025-08-29 14:22:16.148164 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-08-29 14:22:16.245099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-08-29 14:22:16.245166 | orchestrator |
2025-08-29 14:22:16.245180 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-08-29 14:22:16.289468 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:16.289520 | orchestrator |
2025-08-29 14:22:16.289533 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-08-29 14:22:18.415296 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-08-29 14:22:18.415386 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-08-29 14:22:18.415399 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-08-29 14:22:18.415411 | orchestrator |
2025-08-29 14:22:18.415423 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-08-29 14:22:19.225817 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:19.225913 | orchestrator |
2025-08-29 14:22:19.225932 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-08-29 14:22:19.967576 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:19.967661 | orchestrator |
2025-08-29 14:22:19.967675 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-08-29 14:22:20.723139 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:20.723235 | orchestrator |
2025-08-29 14:22:20.723251 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-08-29 14:22:20.808939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-08-29 14:22:20.809057 | orchestrator |
2025-08-29 14:22:20.809073 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-08-29 14:22:20.865713 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:20.865809 | orchestrator |
2025-08-29 14:22:20.865824 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-08-29 14:22:21.630522 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-08-29 14:22:21.630605 | orchestrator |
2025-08-29 14:22:21.630619 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-08-29 14:22:21.718619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-08-29 14:22:21.718696 | orchestrator |
2025-08-29 14:22:21.718709 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-08-29 14:22:22.417429 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:22.417521 | orchestrator |
2025-08-29 14:22:22.417535 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-08-29 14:22:23.090403 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:23.090505 | orchestrator |
2025-08-29 14:22:23.090520 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-08-29 14:22:23.144853 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:22:23.144926 | orchestrator |
2025-08-29 14:22:23.144939 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-08-29 14:22:23.224167 | orchestrator | ok: [testbed-manager]
2025-08-29 14:22:23.224243 | orchestrator |
2025-08-29 14:22:23.224257 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-08-29 14:22:24.292578 | orchestrator | changed: [testbed-manager]
2025-08-29 14:22:24.293477 | orchestrator |
2025-08-29 14:22:24.293517 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-08-29 14:23:59.421328 | orchestrator | changed: [testbed-manager]
2025-08-29 14:23:59.421443 | orchestrator |
2025-08-29 14:23:59.421461 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-08-29 14:24:00.437499 | orchestrator | ok: [testbed-manager]
2025-08-29 14:24:00.437604 | orchestrator |
2025-08-29 14:24:00.437621 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-08-29 14:24:00.490757 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:24:00.490842 | orchestrator |
2025-08-29 14:24:00.490858 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-08-29 14:24:02.920898 | orchestrator | changed: [testbed-manager]
2025-08-29 14:24:02.921072 | orchestrator |
2025-08-29 14:24:02.921092 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-08-29 14:24:03.013118 | orchestrator | ok: [testbed-manager]
2025-08-29 14:24:03.013223 | orchestrator |
2025-08-29 14:24:03.013238 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-08-29 14:24:03.013251 | orchestrator |
2025-08-29 14:24:03.013262 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-08-29 14:24:03.058937 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:24:03.059055 | orchestrator |
2025-08-29 14:24:03.059066 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-08-29 14:25:03.108117 | orchestrator | Pausing for 60 seconds
2025-08-29 14:25:03.108263 | orchestrator | changed: [testbed-manager]
2025-08-29 14:25:03.108279 | orchestrator |
2025-08-29 14:25:03.108293 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-08-29 14:25:07.201759 | orchestrator | changed: [testbed-manager]
2025-08-29 14:25:07.201862 | orchestrator |
2025-08-29 14:25:07.201879 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-08-29 14:26:09.610260 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-08-29 14:26:09.610383 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
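The `FAILED - RETRYING ... (50 retries left)` lines above come from Ansible's `retries`/`until` mechanism: the handler re-runs a healthcheck a fixed number of times before giving up. A hedged shell sketch of the same polling pattern (the `check` stub below stands in for the real healthcheck command, which the log does not show):

```shell
#!/usr/bin/env bash
# Sketch of the bounded-retry pattern behind the handler above.
retry_until() {  # retry_until <max_attempts> <command...>
  local max_attempts=$1; shift
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after ${max_attempts} attempts" >&2
      return 1
    fi
    echo "FAILED - RETRYING ($((max_attempts - attempt)) retries left)"
    attempt=$((attempt + 1))
  done
}

# Stub check that succeeds on the third call, so two retry messages are
# printed before success, similar to the log above.
n=0
check() { n=$((n + 1)); [ "$n" -ge 3 ]; }
retry_until 50 check
```

The real handler also sleeps between attempts (Ansible's `delay`); that is omitted here to keep the sketch short.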
2025-08-29 14:26:09.610400 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2025-08-29 14:26:09.610413 | orchestrator | changed: [testbed-manager]
2025-08-29 14:26:09.610427 | orchestrator |
2025-08-29 14:26:09.610439 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-08-29 14:26:20.211013 | orchestrator | changed: [testbed-manager]
2025-08-29 14:26:20.211132 | orchestrator |
2025-08-29 14:26:20.211151 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-08-29 14:26:20.301504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-08-29 14:26:20.301624 | orchestrator |
2025-08-29 14:26:20.301649 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-08-29 14:26:20.301670 | orchestrator |
2025-08-29 14:26:20.301690 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-08-29 14:26:20.348179 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:26:20.348244 | orchestrator |
2025-08-29 14:26:20.348250 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:26:20.348256 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-08-29 14:26:20.348260 | orchestrator |
2025-08-29 14:26:20.450260 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-08-29 14:26:20.450317 | orchestrator | + deactivate
2025-08-29 14:26:20.450328 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-08-29 14:26:20.450360 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-08-29 14:26:20.450370 | orchestrator | + export PATH 2025-08-29 14:26:20.450379 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-08-29 14:26:20.450389 | orchestrator | + '[' -n '' ']' 2025-08-29 14:26:20.450398 | orchestrator | + hash -r 2025-08-29 14:26:20.450407 | orchestrator | + '[' -n '' ']' 2025-08-29 14:26:20.450416 | orchestrator | + unset VIRTUAL_ENV 2025-08-29 14:26:20.450425 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-08-29 14:26:20.450435 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-08-29 14:26:20.450446 | orchestrator | + unset -f deactivate 2025-08-29 14:26:20.450458 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-08-29 14:26:20.459570 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 14:26:20.459592 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 14:26:20.459603 | orchestrator | + local max_attempts=60 2025-08-29 14:26:20.459614 | orchestrator | + local name=ceph-ansible 2025-08-29 14:26:20.459625 | orchestrator | + local attempt_num=1 2025-08-29 14:26:20.460737 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:26:20.505688 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:26:20.505740 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 14:26:20.505753 | orchestrator | + local max_attempts=60 2025-08-29 14:26:20.505765 | orchestrator | + local name=kolla-ansible 2025-08-29 14:26:20.505776 | orchestrator | + local attempt_num=1 2025-08-29 14:26:20.506525 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 14:26:20.544914 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:26:20.544942 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 14:26:20.544953 | orchestrator | + local max_attempts=60 2025-08-29 14:26:20.544964 | orchestrator | + local name=osism-ansible 
2025-08-29 14:26:20.544975 | orchestrator | + local attempt_num=1 2025-08-29 14:26:20.545539 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 14:26:20.585561 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:26:20.585586 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 14:26:20.585598 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 14:26:21.354257 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-08-29 14:26:21.579810 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-08-29 14:26:21.579923 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-08-29 14:26:21.579939 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-08-29 14:26:21.579951 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-08-29 14:26:21.579988 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-08-29 14:26:21.580009 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2025-08-29 14:26:21.580021 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2025-08-29 14:26:21.580033 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2025-08-29 14:26:21.580044 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2025-08-29 14:26:21.580055 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-08-29 14:26:21.580067 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-08-29 14:26:21.580079 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-08-29 14:26:21.580090 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-08-29 14:26:21.580114 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-08-29 14:26:21.580126 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2025-08-29 14:26:21.586696 | orchestrator | ++ semver latest 7.0.0 2025-08-29 14:26:21.643324 | orchestrator | + [[ -1 -ge 0 ]] 2025-08-29 14:26:21.643394 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-08-29 14:26:21.643410 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-08-29 14:26:21.647632 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-08-29 14:26:33.820809 | orchestrator | 2025-08-29 14:26:33 | INFO  | Task 9932824c-8865-4752-9822-ced6c2cf8aba (resolvconf) was prepared for execution. 2025-08-29 14:26:33.820980 | orchestrator | 2025-08-29 14:26:33 | INFO  | It takes a moment until task 9932824c-8865-4752-9822-ced6c2cf8aba (resolvconf) has been started and output is visible here. 
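The shell trace above (`wait_for_container_healthy 60 ceph-ansible` and friends) polls `docker inspect -f '{{.State.Health.Status}}'` until a container reports healthy. A minimal sketch of such a poll loop, using the function and variable names visible in the trace; the retry interval and the `check_status` stub (used so the sketch runs without Docker) are assumptions, not taken from the deploy script:

```shell
#!/bin/sh
# Illustrative stub; the deploy script instead runs:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
check_status() {
    echo healthy
}

# Poll check_status until it prints "healthy" or max_attempts runs out.
# Returns 0 on healthy, 1 if the attempts are exhausted.
wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    while [ "$attempt_num" -le "$max_attempts" ]; do
        if [ "$(check_status "$name")" = healthy ]; then
            return 0
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1
    done
    return 1
}
```

In the job this pattern runs once per container (ceph-ansible, kolla-ansible, osism-ansible) before the deployment continues.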
2025-08-29 14:26:48.450339 | orchestrator |
2025-08-29 14:26:48.450515 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-08-29 14:26:48.450548 | orchestrator |
2025-08-29 14:26:48.450569 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-08-29 14:26:48.450584 | orchestrator | Friday 29 August 2025 14:26:37 +0000 (0:00:00.156) 0:00:00.156 *********
2025-08-29 14:26:48.450596 | orchestrator | ok: [testbed-manager]
2025-08-29 14:26:48.450609 | orchestrator |
2025-08-29 14:26:48.450626 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-08-29 14:26:48.450639 | orchestrator | Friday 29 August 2025 14:26:42 +0000 (0:00:04.851) 0:00:05.008 *********
2025-08-29 14:26:48.450689 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:26:48.450701 | orchestrator |
2025-08-29 14:26:48.450712 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-08-29 14:26:48.450723 | orchestrator | Friday 29 August 2025 14:26:42 +0000 (0:00:00.052) 0:00:05.061 *********
2025-08-29 14:26:48.450735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-08-29 14:26:48.450747 | orchestrator |
2025-08-29 14:26:48.450758 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-08-29 14:26:48.450769 | orchestrator | Friday 29 August 2025 14:26:42 +0000 (0:00:00.067) 0:00:05.128 *********
2025-08-29 14:26:48.450781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:26:48.450792 | orchestrator |
2025-08-29 14:26:48.450803 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-08-29 14:26:48.450814 | orchestrator | Friday 29 August 2025 14:26:42 +0000 (0:00:00.078) 0:00:05.207 *********
2025-08-29 14:26:48.450824 | orchestrator | ok: [testbed-manager]
2025-08-29 14:26:48.450835 | orchestrator |
2025-08-29 14:26:48.450873 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-08-29 14:26:48.450885 | orchestrator | Friday 29 August 2025 14:26:43 +0000 (0:00:00.900) 0:00:06.108 *********
2025-08-29 14:26:48.450896 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:26:48.450906 | orchestrator |
2025-08-29 14:26:48.450917 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-08-29 14:26:48.450928 | orchestrator | Friday 29 August 2025 14:26:43 +0000 (0:00:00.056) 0:00:06.164 *********
2025-08-29 14:26:48.450939 | orchestrator | ok: [testbed-manager]
2025-08-29 14:26:48.450949 | orchestrator |
2025-08-29 14:26:48.450960 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-08-29 14:26:48.450971 | orchestrator | Friday 29 August 2025 14:26:44 +0000 (0:00:00.431) 0:00:06.595 *********
2025-08-29 14:26:48.450981 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:26:48.450992 | orchestrator |
2025-08-29 14:26:48.451003 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-08-29 14:26:48.451016 | orchestrator | Friday 29 August 2025 14:26:44 +0000 (0:00:00.081) 0:00:06.677 *********
2025-08-29 14:26:48.451027 | orchestrator | changed: [testbed-manager]
2025-08-29 14:26:48.451037 | orchestrator |
2025-08-29 14:26:48.451048 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-08-29 14:26:48.451059 | orchestrator | Friday 29 August 2025 14:26:44 +0000 (0:00:00.522) 0:00:07.199 *********
2025-08-29 14:26:48.451070 | orchestrator | changed: [testbed-manager]
2025-08-29 14:26:48.451080 | orchestrator |
2025-08-29 14:26:48.451091 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-08-29 14:26:48.451102 | orchestrator | Friday 29 August 2025 14:26:46 +0000 (0:00:01.066) 0:00:08.266 *********
2025-08-29 14:26:48.451113 | orchestrator | ok: [testbed-manager]
2025-08-29 14:26:48.451123 | orchestrator |
2025-08-29 14:26:48.451134 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-08-29 14:26:48.451145 | orchestrator | Friday 29 August 2025 14:26:46 +0000 (0:00:00.945) 0:00:09.211 *********
2025-08-29 14:26:48.451156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-08-29 14:26:48.451166 | orchestrator |
2025-08-29 14:26:48.451177 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-08-29 14:26:48.451204 | orchestrator | Friday 29 August 2025 14:26:47 +0000 (0:00:00.073) 0:00:09.285 *********
2025-08-29 14:26:48.451215 | orchestrator | changed: [testbed-manager]
2025-08-29 14:26:48.451226 | orchestrator |
2025-08-29 14:26:48.451237 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:26:48.451249 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 14:26:48.451268 | orchestrator |
2025-08-29 14:26:48.451279 | orchestrator |
2025-08-29 14:26:48.451290 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:26:48.451301 | orchestrator | Friday 29 August 2025 14:26:48 +0000 (0:00:01.142) 0:00:10.427 *********
2025-08-29 14:26:48.451312 | orchestrator | ===============================================================================
2025-08-29 14:26:48.451323 | orchestrator | Gathering Facts --------------------------------------------------------- 4.85s
2025-08-29 14:26:48.451334 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s
2025-08-29 14:26:48.451344 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s
2025-08-29 14:26:48.451355 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s
2025-08-29 14:26:48.451366 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.90s
2025-08-29 14:26:48.451377 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2025-08-29 14:26:48.451409 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.43s
2025-08-29 14:26:48.451421 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-08-29 14:26:48.451432 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2025-08-29 14:26:48.451443 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s
2025-08-29 14:26:48.451453 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2025-08-29 14:26:48.451464 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-08-29 14:26:48.451475 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2025-08-29 14:26:48.762374 | orchestrator | + osism apply sshconfig
2025-08-29 14:27:00.806818 | orchestrator | 2025-08-29 14:27:00 | INFO  | Task 2a8fadea-6003-41f7-862a-6ce6794ad4f4 (sshconfig) was prepared for execution.
2025-08-29 14:27:00.806992 | orchestrator | 2025-08-29 14:27:00 | INFO  | It takes a moment until task 2a8fadea-6003-41f7-862a-6ce6794ad4f4 (sshconfig) has been started and output is visible here.
2025-08-29 14:27:12.825752 | orchestrator |
2025-08-29 14:27:12.825962 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-08-29 14:27:12.825985 | orchestrator |
2025-08-29 14:27:12.825997 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-08-29 14:27:12.826009 | orchestrator | Friday 29 August 2025 14:27:04 +0000 (0:00:00.166) 0:00:00.166 *********
2025-08-29 14:27:12.826076 | orchestrator | ok: [testbed-manager]
2025-08-29 14:27:12.826090 | orchestrator |
2025-08-29 14:27:12.826102 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-08-29 14:27:12.826113 | orchestrator | Friday 29 August 2025 14:27:05 +0000 (0:00:00.610) 0:00:00.777 *********
2025-08-29 14:27:12.826125 | orchestrator | changed: [testbed-manager]
2025-08-29 14:27:12.826137 | orchestrator |
2025-08-29 14:27:12.826148 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-08-29 14:27:12.826159 | orchestrator | Friday 29 August 2025 14:27:05 +0000 (0:00:00.572) 0:00:01.350 *********
2025-08-29 14:27:12.826170 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-08-29 14:27:12.826181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-08-29 14:27:12.826194 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-08-29 14:27:12.826205 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-08-29 14:27:12.826216 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-08-29 14:27:12.826226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-08-29 14:27:12.826237 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-08-29 14:27:12.826281 | orchestrator |
2025-08-29 14:27:12.826316 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-08-29 14:27:12.826329 | orchestrator | Friday 29 August 2025 14:27:11 +0000 (0:00:05.937) 0:00:07.288 *********
2025-08-29 14:27:12.826341 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:27:12.826353 | orchestrator |
2025-08-29 14:27:12.826365 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-08-29 14:27:12.826377 | orchestrator | Friday 29 August 2025 14:27:11 +0000 (0:00:00.073) 0:00:07.361 *********
2025-08-29 14:27:12.826388 | orchestrator | changed: [testbed-manager]
2025-08-29 14:27:12.826400 | orchestrator |
2025-08-29 14:27:12.826412 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:27:12.826428 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:27:12.826448 | orchestrator |
2025-08-29 14:27:12.826467 | orchestrator |
2025-08-29 14:27:12.826486 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:27:12.826504 | orchestrator | Friday 29 August 2025 14:27:12 +0000 (0:00:00.583) 0:00:07.945 *********
2025-08-29 14:27:12.826523 | orchestrator | ===============================================================================
2025-08-29 14:27:12.826543 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.94s
2025-08-29 14:27:12.826564 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.61s
2025-08-29 14:27:12.826584 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s
2025-08-29 14:27:12.826598 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.57s
2025-08-29 14:27:12.826609 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-08-29 14:27:13.129318 | orchestrator | + osism apply known-hosts
2025-08-29 14:27:25.209623 | orchestrator | 2025-08-29 14:27:25 | INFO  | Task 0502d027-09eb-4c52-a12c-cc4bde95d457 (known-hosts) was prepared for execution.
2025-08-29 14:27:25.209760 | orchestrator | 2025-08-29 14:27:25 | INFO  | It takes a moment until task 0502d027-09eb-4c52-a12c-cc4bde95d457 (known-hosts) has been started and output is visible here.
2025-08-29 14:27:43.219594 | orchestrator |
2025-08-29 14:27:43.219717 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-08-29 14:27:43.219730 | orchestrator |
2025-08-29 14:27:43.219740 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-08-29 14:27:43.219750 | orchestrator | Friday 29 August 2025 14:27:29 +0000 (0:00:00.169) 0:00:00.170 *********
2025-08-29 14:27:43.219759 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-08-29 14:27:43.219768 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-08-29 14:27:43.219776 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-08-29 14:27:43.219784 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-08-29 14:27:43.219792 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-08-29 14:27:43.219800 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-08-29 14:27:43.219808 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-08-29 14:27:43.219816 | orchestrator |
2025-08-29 14:27:43.219872 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-08-29 14:27:43.219883 | orchestrator | Friday 29 August 2025 14:27:35 +0000 (0:00:06.016) 0:00:06.186 *********
2025-08-29 14:27:43.219892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-08-29 14:27:43.219903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-08-29 14:27:43.219912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-08-29 14:27:43.219946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-08-29 14:27:43.219954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-08-29 14:27:43.219973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-08-29 14:27:43.219981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-08-29 14:27:43.219989 | orchestrator |
2025-08-29 14:27:43.219998 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 14:27:43.220006 | orchestrator | Friday 29 August 2025 14:27:35 +0000 (0:00:00.162) 0:00:06.349 *********
2025-08-29 14:27:43.220015 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKcpw+wz15YXxWzKVyrYgQw/QslVjA3lCbqOaaMNNiDh)
2025-08-29 14:27:43.220028 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0roqYjlrtS8bYtJixfCUk9lpnCAIyCpDquo8FwrMKqtl9pEXpTqQiseVGjZaILD/9b1CRY1S0DCacj5PzKg4YyVgFG8vPTb7ZPlXhdlA7EtbgaRCzQSZJXfEkMdaIyH3BNWA5RodvJM+poMumLnvIAQUlGmuY0lzJQ2QKi6Ry0NdPE2WTU0djon+cQqyoaLUgUWrk8hcNa8IY5B6RwhOl/SdSiHvSOmU4/vWwgsnnnItcvkyFkJa5m/JSel6WuLX4p2X8LyoO3o9JxK6fNqluqhSpW4pQG2HpP2rgUY9CiXuA7ss/YJb+efryiqcAyrCuz2U8dVgZWbGzoNaBbppYpLpn1WH2uGH6QKkXpcKNqgc4lCp+tsAP2EfaQ7rAWec1cmKgwqsbnkFTMlKvPJ6bXR7uekG5I+hQjIziPwqDU68DpKlV7AZr/dJWQ/hZVkcJXqbuhUAAnPjOYPrAC2W2o9G56xSxiV+ECKzpSdu9acb2up/ityZs6KazkRoFkck=)
2025-08-29 14:27:43.220040 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJo626X34pTBtCTh4LLWKUao8Q4WxxjSbkKi49RW3z+laA7FjZHQhfk4u8urefMN9N5M48a9hg4sIuHPrjZjIxg=)
2025-08-29 14:27:43.220050 | orchestrator |
2025-08-29 14:27:43.220058 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 14:27:43.220066 | orchestrator | Friday 29 August 2025 14:27:37 +0000 (0:00:02.230) 0:00:08.579 *********
2025-08-29 14:27:43.220091 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxjzyAcyjJbyOaI4suImCBnGjZv9FshT+XpTWARV8lTuEo8w+XFEiZj9LrD5L3pDVwQphvC6erUG+NW8wK6pxRCG3+C2dJvL3qtoKsITBTkHsMnJMaDkMlilTLSR93FzafxJR+Lbmc/xdbRXCuN+Z8hDF/vsFg0p0vcVOowfVhGVlwMwlxusBnsukuSD5qa2UpiERpecAtDqtGSIBmeAAHXqER8nMZdsBFgPpQEhCWZCW9lL2tiiMJrkKe2ePIJHZ9fKAQc4f/GvNEglSYThizEccthRQgqbcUAmP3RlF2NgGhY/r9gH/eyDTwNFu/m6tPHu6Rn9LIqnxagCJLE2O1bvjRkTtX7TG/JMl3rd4toDQ2AnhUmePtt1qHzXfRdErIzHDMo1rhceu5sqtT+zN/OAfLkyfSvCmchQ5ZuOIuRPxIWH0Z1trd0QiV01qoqrOyXtj331dJRatfn6W7ZMhvIEoad+X05fFRr5uvZv4eGJQWr6Vhg5otwg0EnaeTz5E=)
2025-08-29 14:27:43.220101 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF09N4LP3d134iGs5EWrnjpjfYAeEiLzGARAZYcsg22ZOixlnZ4BmNVZkO9Gfr5RdcT4d1Uc8dSU6wWLq0BH93A=)
2025-08-29 14:27:43.220111 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDqixe/beZwce6vABEcSyBHvgM/pmlX+GInK9ni+qEqS)
2025-08-29 14:27:43.220120 | orchestrator |
2025-08-29 14:27:43.220129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 14:27:43.220138 | orchestrator | Friday 29 August 2025 14:27:38 +0000 (0:00:01.096) 0:00:09.676 *********
2025-08-29 14:27:43.220147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxYDv4ykNabjhD69QGK2188/TTrTDREMQNgzLt6ORZtG4LyjFUEKpJyTuAFJZMPpapr6nMt33u3QvwAfuO2YCWnrc2fuH8/ZWIo5O5qFX8SoUSuLE+tqnhqs2uMJHD9jtLTcPxooLrz9aU6U/XUdyfcw+Pj2w+PvtK/PsZgdyunt1mBwhCMF+Ewy0J5wOm0rgP4Q09NE0bCrjqk9PWQzOJflA0FLl783ltENbL98K1Q04BgqTgIGaR35D7fOrNcyLW8P6Mv39ffI/Ma7RxLBS2Q1okWH2V3zH5I6QZy1z06npIj/wjxI/4xwtehSyBozoG91OxS8TEcPnMqE3Q2kJZ88hmXVlC2QxVyevh/k6iBxROywGv8ojqYDwnJ5aRoubsmRnhP1kKemB847g50EsRzrmp/N64nzhx7F0yVvSUZ5Da1psUn57QAQYpLB7FLcKTC2RcH/VQ0gzYhOjfGzxq+wQyPWR6F8dJK2mEhNILPO/rca5aqC/WgMfkVUx4vKE=)
2025-08-29 14:27:43.220163 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIz5p8B4V3e7x3dsXJQXNVOBAmoNNo8sb5TzvBKXT+18)
2025-08-29 14:27:43.220172 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwdEUhTdwe5w6mmQbAocK0KgZlsltSAp2i1IIqMBWf6GWJNRhXJNyts8YEppX5t7E4D4EzaO4IbEJS04KSmKUY=)
2025-08-29 14:27:43.220181 | orchestrator |
2025-08-29 14:27:43.220190 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 14:27:43.220199 | orchestrator | Friday 29 August 2025 14:27:39 +0000 (0:00:01.114) 0:00:10.790 *********
2025-08-29 14:27:43.220262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC31ZgmndcmFs6acnqRZk2D/IuAEODx+DoQyuuL/H0ma6rS4HengaoqNcNTqS18oRph5W4/zv5i+MEGnlneAbQV3tUZYtayms21NJvvQ+oxXpdor/vSpb1GGuz/15A7YAg8GgShfrD1idjLxyK39l2lDvma/i97doJJfkAzL45bKgEtFRXvfqNDsPiU4B2a3eT1IQYloPUHzpXzmqvsep1upZyFId2kaBB9lSPfE6Kj2CWlKj00PzJrzGEihzFQuiiamqETUvGyDeWboZNlsC2KtMpdeI7SHqWPIRT8yTTsuYa46PDrmamS2ZLNiUNmJjcGX8QNX0oveMjT+/3GFpJK0NTmN1Jo4yFHOWDrc9ELo1m7kT0tNcoJAmu6sB5BYBPiFbI5Z98WgCSYlrEtfCcAS86XMu6Z+9SOaXrEN0bxcnlwDitiTLJ5kRjEraeVwt/9eUOqNczkFQ79UvH3E0xuZ5lpUXhnBSkUtEV0OmefKKn4ClsXN2MlmgPmNgFR8Dk=)
2025-08-29 14:27:43.220272 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAVQ+TfNZtdKDqGAfhCbIY5Asc2/IUpS9HrFFCUfL8PhSCJ6cnpp5IasM1QCTEcF64pJMzGM+Zxl6rJ90RrYvQA=)
2025-08-29 14:27:43.220281 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPRg8QxvVKmzgC5JGcHHh0f6PEmDLLkzhVbnIcbH6zbc)
2025-08-29 14:27:43.220290 | orchestrator |
2025-08-29 14:27:43.220299 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 14:27:43.220308 | orchestrator | Friday 29 August 2025 14:27:40 +0000 (0:00:01.089) 0:00:11.880 *********
2025-08-29 14:27:43.220317 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIITqNyWisxDoo6CoJTBdOHwd1sbnI5m8C0Fqt1Hd7yBU)
2025-08-29 14:27:43.220326 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpyTIj+cgCXDzg+fla7YZyJEKsw25TLnTPgeKP2m45lzBSH19SzOTDZNCvsJ/BGTcMUEnkMcndx3eP9AaFb6psbYAQt044Og/EKz8q0wB7E0EoC+q5VPVg115TIhZZSvbE0FCz8xvoypJkXwyr0bCvYgoeBbcJssvdLGrNdoMRiuUVooAbQjM241vBcdQKLmgnaCXkhrdcegMPf6+U5v+8A4JAXo7h0nzaa9dk7/Pq52uVa1YIOa4qm6CBZFsKkwjOc28V9xH1bhPBS+bMLOCIHsPehfqr15c88JnH8+xy1WFEE0GqL9CSMgw5BxccIGZyY6tJo8TziOHGM12+2sxUDs8Vv5yMMKeYmYgsVs4UuDQYQb3zrpgUaT2DO8kdBfWq7EWQE9mPlRJw7X9OUwYdNTXGXWvuAiXNdezReth118pgfwDLIc1BLgV5fCXZp85sYBqbASYIuKH0FZU2TIntBse6d2OsVjuWtl84+FUkO+sI/BrI38Wntf2qswuMYdM=)
2025-08-29 14:27:43.220336 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGrlX9GEEbyL1ESnZlmAXZJym3F14SJDoFpaSuqiCorJThqnH6DIB4+1cZZTo/h5318MZAIurRaNGEdB98ADXJg=)
2025-08-29 14:27:43.220345 | orchestrator |
2025-08-29 14:27:43.220354 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-08-29 14:27:43.220363 | orchestrator | Friday 29 August 2025 14:27:42 +0000 (0:00:01.147) 0:00:13.028 *********
2025-08-29 14:27:43.220378 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFoHgHyzysiEWWvuBsIJVPtZbPNEMNDbWxULbKssxMZUtjYoChUec7E8Djk3PREq5y1aBOp24vqdFDhMCOHh7vs=)
2025-08-29 14:27:54.339144 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnh8mHXyPuLHVwqNt5rwfLRuWzLbKEEcW2rekce+Wo7dMYNWzFcYchfNbB+ATq8iDQkos/LC/Tyd9A28MuwWE1uhfGNlKvuytB6xmSxBf2CpEvnmjQCmIs+TxftdbRYPi3+CJoG3Tcc6fCzCMmppDTXyC5n7aen+uvn2dJi4hHKz5CLsrIhSkmbTunFvpgi9WItsWEzJQRS9/V5VUg5DiqSapetQN/dJblYYdNiK9B+BxpcVe+LP3+MkH4g32R4R3OuTSidQR6cGO8paELF1U/Rk8s3Ajp5owoTKhkl0mVWmypYhfTWofZDYwSKPFbfDTH3Zl8CYECpsZOBBrJqTAPtTLqs8x9PL/yTQrscy7OHSuydKVHTx5MomYsftssbhtuAQFi28YJyb8Xe94dRLwCFPjf3/p/bSXp4mtQZfAome5+Wqi02P3YwE9j3dEsQwpH+KaPJHkjAMj/BUGaEZaio5Arj/FMKhnodVQ4BSD4pZeL4ccuF8J2oxT11SHg6ic=)
2025-08-29 14:27:54.339269 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGjMINJyIr8bQXPXDNcJ05Szt8qviawzu0dioUC/bkeM) 2025-08-29 14:27:54.339281 | orchestrator | 2025-08-29 14:27:54.339291 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:54.339299 | orchestrator | Friday 29 August 2025 14:27:43 +0000 (0:00:01.135) 0:00:14.163 ********* 2025-08-29 14:27:54.339306 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMLUfJH5MdOnFiM++8lhKPb5mNZddWvtJrZgybWojI38) 2025-08-29 14:27:54.339314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3xplBApUNAOhe/hcYNy8bVoi/jXBM3yAHZ0h9bK/1KerNErJHeCzLp4smCM5Fd+DVDANcONgsVG8Ls3stQTLoqwame54BjUhZhuKcFODDlSzhdnSHsw1qIebLWNbaEC+bG1sxMNEZUfA+guMbBNPCr0CUoFtnbnvoG88AKvZo6bg52Jctr+5sTKfm+E15rEft+e/93ncNNhNm9dIZmUaa12kCflDwc4Gy18JLr9SaWThTwdV3U9YS+438w0Jl6J2nz0wgKBHX7U/asVoqDItLZ22+hvsrs16DvHRQJHrrMF+8N2hisaVVFOV8a7TqMhvXk1JemPA3ROVYMmYYsZV+gCkCdkEuu+mJgmB3zd+WMTjOVqJnOrCa5yR4HuhNMxRH7wH/Ri1IL4oeh7ikFL8TauD/NC/gIfq/OsrH62oxETXPqOebgYBtrl6KENe2itjvc18zoVqZLDY3XGqKcnpbsSNACvgJVIdL0zRji7yF29SWB6ZBC8dlFe6//YXsTsk=) 2025-08-29 14:27:54.339322 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHig2spbv86qOmwI6Z7TnNiDWSvy94eFgkD7nqY+5UuCGM6mgRzBD8LRbOS0V6ff+LR564FkfuqLE3uhlZho7uw=) 2025-08-29 14:27:54.339329 | orchestrator | 2025-08-29 14:27:54.339335 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-08-29 14:27:54.339342 | orchestrator | Friday 29 August 2025 14:27:44 +0000 (0:00:01.120) 0:00:15.283 ********* 2025-08-29 14:27:54.339350 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-08-29 14:27:54.339356 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-0) 2025-08-29 14:27:54.339362 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-08-29 14:27:54.339368 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-08-29 14:27:54.339374 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-08-29 14:27:54.339379 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-08-29 14:27:54.339385 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-08-29 14:27:54.339391 | orchestrator | 2025-08-29 14:27:54.339405 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-08-29 14:27:54.339413 | orchestrator | Friday 29 August 2025 14:27:49 +0000 (0:00:05.387) 0:00:20.670 ********* 2025-08-29 14:27:54.339421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-08-29 14:27:54.339429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-08-29 14:27:54.339435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-08-29 14:27:54.339441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-08-29 14:27:54.339464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-08-29 14:27:54.339470 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-08-29 14:27:54.339476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-08-29 14:27:54.339481 | orchestrator | 2025-08-29 14:27:54.339502 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:54.339508 | orchestrator | Friday 29 August 2025 14:27:49 +0000 (0:00:00.172) 0:00:20.843 ********* 2025-08-29 14:27:54.339514 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKcpw+wz15YXxWzKVyrYgQw/QslVjA3lCbqOaaMNNiDh) 2025-08-29 14:27:54.339522 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0roqYjlrtS8bYtJixfCUk9lpnCAIyCpDquo8FwrMKqtl9pEXpTqQiseVGjZaILD/9b1CRY1S0DCacj5PzKg4YyVgFG8vPTb7ZPlXhdlA7EtbgaRCzQSZJXfEkMdaIyH3BNWA5RodvJM+poMumLnvIAQUlGmuY0lzJQ2QKi6Ry0NdPE2WTU0djon+cQqyoaLUgUWrk8hcNa8IY5B6RwhOl/SdSiHvSOmU4/vWwgsnnnItcvkyFkJa5m/JSel6WuLX4p2X8LyoO3o9JxK6fNqluqhSpW4pQG2HpP2rgUY9CiXuA7ss/YJb+efryiqcAyrCuz2U8dVgZWbGzoNaBbppYpLpn1WH2uGH6QKkXpcKNqgc4lCp+tsAP2EfaQ7rAWec1cmKgwqsbnkFTMlKvPJ6bXR7uekG5I+hQjIziPwqDU68DpKlV7AZr/dJWQ/hZVkcJXqbuhUAAnPjOYPrAC2W2o9G56xSxiV+ECKzpSdu9acb2up/ityZs6KazkRoFkck=) 2025-08-29 14:27:54.339528 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJo626X34pTBtCTh4LLWKUao8Q4WxxjSbkKi49RW3z+laA7FjZHQhfk4u8urefMN9N5M48a9hg4sIuHPrjZjIxg=) 2025-08-29 14:27:54.339534 | orchestrator | 2025-08-29 14:27:54.339540 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:54.339546 | orchestrator | Friday 29 August 2025 
14:27:51 +0000 (0:00:01.128) 0:00:21.971 ********* 2025-08-29 14:27:54.339551 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF09N4LP3d134iGs5EWrnjpjfYAeEiLzGARAZYcsg22ZOixlnZ4BmNVZkO9Gfr5RdcT4d1Uc8dSU6wWLq0BH93A=) 2025-08-29 14:27:54.339558 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxjzyAcyjJbyOaI4suImCBnGjZv9FshT+XpTWARV8lTuEo8w+XFEiZj9LrD5L3pDVwQphvC6erUG+NW8wK6pxRCG3+C2dJvL3qtoKsITBTkHsMnJMaDkMlilTLSR93FzafxJR+Lbmc/xdbRXCuN+Z8hDF/vsFg0p0vcVOowfVhGVlwMwlxusBnsukuSD5qa2UpiERpecAtDqtGSIBmeAAHXqER8nMZdsBFgPpQEhCWZCW9lL2tiiMJrkKe2ePIJHZ9fKAQc4f/GvNEglSYThizEccthRQgqbcUAmP3RlF2NgGhY/r9gH/eyDTwNFu/m6tPHu6Rn9LIqnxagCJLE2O1bvjRkTtX7TG/JMl3rd4toDQ2AnhUmePtt1qHzXfRdErIzHDMo1rhceu5sqtT+zN/OAfLkyfSvCmchQ5ZuOIuRPxIWH0Z1trd0QiV01qoqrOyXtj331dJRatfn6W7ZMhvIEoad+X05fFRr5uvZv4eGJQWr6Vhg5otwg0EnaeTz5E=) 2025-08-29 14:27:54.339564 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDqixe/beZwce6vABEcSyBHvgM/pmlX+GInK9ni+qEqS) 2025-08-29 14:27:54.339570 | orchestrator | 2025-08-29 14:27:54.339576 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:54.339582 | orchestrator | Friday 29 August 2025 14:27:52 +0000 (0:00:01.105) 0:00:23.077 ********* 2025-08-29 14:27:54.339587 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIz5p8B4V3e7x3dsXJQXNVOBAmoNNo8sb5TzvBKXT+18) 2025-08-29 14:27:54.339594 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxYDv4ykNabjhD69QGK2188/TTrTDREMQNgzLt6ORZtG4LyjFUEKpJyTuAFJZMPpapr6nMt33u3QvwAfuO2YCWnrc2fuH8/ZWIo5O5qFX8SoUSuLE+tqnhqs2uMJHD9jtLTcPxooLrz9aU6U/XUdyfcw+Pj2w+PvtK/PsZgdyunt1mBwhCMF+Ewy0J5wOm0rgP4Q09NE0bCrjqk9PWQzOJflA0FLl783ltENbL98K1Q04BgqTgIGaR35D7fOrNcyLW8P6Mv39ffI/Ma7RxLBS2Q1okWH2V3zH5I6QZy1z06npIj/wjxI/4xwtehSyBozoG91OxS8TEcPnMqE3Q2kJZ88hmXVlC2QxVyevh/k6iBxROywGv8ojqYDwnJ5aRoubsmRnhP1kKemB847g50EsRzrmp/N64nzhx7F0yVvSUZ5Da1psUn57QAQYpLB7FLcKTC2RcH/VQ0gzYhOjfGzxq+wQyPWR6F8dJK2mEhNILPO/rca5aqC/WgMfkVUx4vKE=) 2025-08-29 14:27:54.339604 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKwdEUhTdwe5w6mmQbAocK0KgZlsltSAp2i1IIqMBWf6GWJNRhXJNyts8YEppX5t7E4D4EzaO4IbEJS04KSmKUY=) 2025-08-29 14:27:54.339610 | orchestrator | 2025-08-29 14:27:54.339616 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:54.339621 | orchestrator | Friday 29 August 2025 14:27:53 +0000 (0:00:01.112) 0:00:24.189 ********* 2025-08-29 14:27:54.339639 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC31ZgmndcmFs6acnqRZk2D/IuAEODx+DoQyuuL/H0ma6rS4HengaoqNcNTqS18oRph5W4/zv5i+MEGnlneAbQV3tUZYtayms21NJvvQ+oxXpdor/vSpb1GGuz/15A7YAg8GgShfrD1idjLxyK39l2lDvma/i97doJJfkAzL45bKgEtFRXvfqNDsPiU4B2a3eT1IQYloPUHzpXzmqvsep1upZyFId2kaBB9lSPfE6Kj2CWlKj00PzJrzGEihzFQuiiamqETUvGyDeWboZNlsC2KtMpdeI7SHqWPIRT8yTTsuYa46PDrmamS2ZLNiUNmJjcGX8QNX0oveMjT+/3GFpJK0NTmN1Jo4yFHOWDrc9ELo1m7kT0tNcoJAmu6sB5BYBPiFbI5Z98WgCSYlrEtfCcAS86XMu6Z+9SOaXrEN0bxcnlwDitiTLJ5kRjEraeVwt/9eUOqNczkFQ79UvH3E0xuZ5lpUXhnBSkUtEV0OmefKKn4ClsXN2MlmgPmNgFR8Dk=) 2025-08-29 14:27:58.745026 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAVQ+TfNZtdKDqGAfhCbIY5Asc2/IUpS9HrFFCUfL8PhSCJ6cnpp5IasM1QCTEcF64pJMzGM+Zxl6rJ90RrYvQA=) 
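The "Write scanned known_hosts entries" tasks above are appending, per host, the keys gathered earlier by `ssh-keyscan`, and a later task tightens the file permissions. A minimal shell sketch of that flow, under the assumption that the role scans each host and appends the results; the file path and the key below are placeholders, not values taken from the role:

```shell
# Sketch of the scan-and-write flow seen in the tasks above.
# The key string is a placeholder, not a real scanned key.
KNOWN_HOSTS="$(mktemp)"
for host in 192.168.16.5 192.168.16.10 192.168.16.11; do
    # A real scan would be: ssh-keyscan "$host" >> "$KNOWN_HOSTS"
    printf '%s ssh-ed25519 AAAAC3...placeholder\n' "$host" >> "$KNOWN_HOSTS"
done
chmod 0644 "$KNOWN_HOSTS"   # corresponds to the "Set file permissions" task
wc -l < "$KNOWN_HOSTS"      # one entry written per host/key pair
```

In the log each host contributes three entries (ssh-ed25519, ssh-rsa, ecdsa-sha2-nistp256), since `ssh-keyscan` reports every host key type the server offers.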
2025-08-29 14:27:58.745127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPRg8QxvVKmzgC5JGcHHh0f6PEmDLLkzhVbnIcbH6zbc) 2025-08-29 14:27:58.745142 | orchestrator | 2025-08-29 14:27:58.745155 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:58.745167 | orchestrator | Friday 29 August 2025 14:27:54 +0000 (0:00:01.092) 0:00:25.282 ********* 2025-08-29 14:27:58.745180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpyTIj+cgCXDzg+fla7YZyJEKsw25TLnTPgeKP2m45lzBSH19SzOTDZNCvsJ/BGTcMUEnkMcndx3eP9AaFb6psbYAQt044Og/EKz8q0wB7E0EoC+q5VPVg115TIhZZSvbE0FCz8xvoypJkXwyr0bCvYgoeBbcJssvdLGrNdoMRiuUVooAbQjM241vBcdQKLmgnaCXkhrdcegMPf6+U5v+8A4JAXo7h0nzaa9dk7/Pq52uVa1YIOa4qm6CBZFsKkwjOc28V9xH1bhPBS+bMLOCIHsPehfqr15c88JnH8+xy1WFEE0GqL9CSMgw5BxccIGZyY6tJo8TziOHGM12+2sxUDs8Vv5yMMKeYmYgsVs4UuDQYQb3zrpgUaT2DO8kdBfWq7EWQE9mPlRJw7X9OUwYdNTXGXWvuAiXNdezReth118pgfwDLIc1BLgV5fCXZp85sYBqbASYIuKH0FZU2TIntBse6d2OsVjuWtl84+FUkO+sI/BrI38Wntf2qswuMYdM=) 2025-08-29 14:27:58.745194 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGrlX9GEEbyL1ESnZlmAXZJym3F14SJDoFpaSuqiCorJThqnH6DIB4+1cZZTo/h5318MZAIurRaNGEdB98ADXJg=) 2025-08-29 14:27:58.745206 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIITqNyWisxDoo6CoJTBdOHwd1sbnI5m8C0Fqt1Hd7yBU) 2025-08-29 14:27:58.745217 | orchestrator | 2025-08-29 14:27:58.745228 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:58.745239 | orchestrator | Friday 29 August 2025 14:27:55 +0000 (0:00:01.116) 0:00:26.399 ********* 2025-08-29 14:27:58.745250 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnh8mHXyPuLHVwqNt5rwfLRuWzLbKEEcW2rekce+Wo7dMYNWzFcYchfNbB+ATq8iDQkos/LC/Tyd9A28MuwWE1uhfGNlKvuytB6xmSxBf2CpEvnmjQCmIs+TxftdbRYPi3+CJoG3Tcc6fCzCMmppDTXyC5n7aen+uvn2dJi4hHKz5CLsrIhSkmbTunFvpgi9WItsWEzJQRS9/V5VUg5DiqSapetQN/dJblYYdNiK9B+BxpcVe+LP3+MkH4g32R4R3OuTSidQR6cGO8paELF1U/Rk8s3Ajp5owoTKhkl0mVWmypYhfTWofZDYwSKPFbfDTH3Zl8CYECpsZOBBrJqTAPtTLqs8x9PL/yTQrscy7OHSuydKVHTx5MomYsftssbhtuAQFi28YJyb8Xe94dRLwCFPjf3/p/bSXp4mtQZfAome5+Wqi02P3YwE9j3dEsQwpH+KaPJHkjAMj/BUGaEZaio5Arj/FMKhnodVQ4BSD4pZeL4ccuF8J2oxT11SHg6ic=) 2025-08-29 14:27:58.745284 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFoHgHyzysiEWWvuBsIJVPtZbPNEMNDbWxULbKssxMZUtjYoChUec7E8Djk3PREq5y1aBOp24vqdFDhMCOHh7vs=) 2025-08-29 14:27:58.745296 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGjMINJyIr8bQXPXDNcJ05Szt8qviawzu0dioUC/bkeM) 2025-08-29 14:27:58.745307 | orchestrator | 2025-08-29 14:27:58.745318 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-08-29 14:27:58.745329 | orchestrator | Friday 29 August 2025 14:27:56 +0000 (0:00:01.161) 0:00:27.561 ********* 2025-08-29 14:27:58.745339 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMLUfJH5MdOnFiM++8lhKPb5mNZddWvtJrZgybWojI38) 2025-08-29 14:27:58.745351 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3xplBApUNAOhe/hcYNy8bVoi/jXBM3yAHZ0h9bK/1KerNErJHeCzLp4smCM5Fd+DVDANcONgsVG8Ls3stQTLoqwame54BjUhZhuKcFODDlSzhdnSHsw1qIebLWNbaEC+bG1sxMNEZUfA+guMbBNPCr0CUoFtnbnvoG88AKvZo6bg52Jctr+5sTKfm+E15rEft+e/93ncNNhNm9dIZmUaa12kCflDwc4Gy18JLr9SaWThTwdV3U9YS+438w0Jl6J2nz0wgKBHX7U/asVoqDItLZ22+hvsrs16DvHRQJHrrMF+8N2hisaVVFOV8a7TqMhvXk1JemPA3ROVYMmYYsZV+gCkCdkEuu+mJgmB3zd+WMTjOVqJnOrCa5yR4HuhNMxRH7wH/Ri1IL4oeh7ikFL8TauD/NC/gIfq/OsrH62oxETXPqOebgYBtrl6KENe2itjvc18zoVqZLDY3XGqKcnpbsSNACvgJVIdL0zRji7yF29SWB6ZBC8dlFe6//YXsTsk=) 2025-08-29 14:27:58.745362 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHig2spbv86qOmwI6Z7TnNiDWSvy94eFgkD7nqY+5UuCGM6mgRzBD8LRbOS0V6ff+LR564FkfuqLE3uhlZho7uw=) 2025-08-29 14:27:58.745373 | orchestrator | 2025-08-29 14:27:58.745384 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-08-29 14:27:58.745395 | orchestrator | Friday 29 August 2025 14:27:57 +0000 (0:00:01.069) 0:00:28.631 ********* 2025-08-29 14:27:58.745407 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-08-29 14:27:58.745418 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-08-29 14:27:58.745444 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-08-29 14:27:58.745456 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-08-29 14:27:58.745467 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-08-29 14:27:58.745477 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-08-29 14:27:58.745488 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-08-29 14:27:58.745499 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:27:58.745510 | orchestrator | 2025-08-29 14:27:58.745520 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2025-08-29 14:27:58.745532 | orchestrator | Friday 29 August 2025 14:27:57 +0000 (0:00:00.176) 0:00:28.807 ********* 2025-08-29 14:27:58.745543 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:27:58.745553 | orchestrator | 2025-08-29 14:27:58.745564 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-08-29 14:27:58.745576 | orchestrator | Friday 29 August 2025 14:27:57 +0000 (0:00:00.064) 0:00:28.872 ********* 2025-08-29 14:27:58.745588 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:27:58.745600 | orchestrator | 2025-08-29 14:27:58.745612 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-08-29 14:27:58.745623 | orchestrator | Friday 29 August 2025 14:27:57 +0000 (0:00:00.066) 0:00:28.938 ********* 2025-08-29 14:27:58.745635 | orchestrator | changed: [testbed-manager] 2025-08-29 14:27:58.745647 | orchestrator | 2025-08-29 14:27:58.745658 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:27:58.745670 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-08-29 14:27:58.745683 | orchestrator | 2025-08-29 14:27:58.745695 | orchestrator | 2025-08-29 14:27:58.745707 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:27:58.745725 | orchestrator | Friday 29 August 2025 14:27:58 +0000 (0:00:00.508) 0:00:29.447 ********* 2025-08-29 14:27:58.745737 | orchestrator | =============================================================================== 2025-08-29 14:27:58.745749 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.02s 2025-08-29 14:27:58.745760 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.39s 2025-08-29 14:27:58.745790 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.23s 2025-08-29 14:27:58.745802 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-08-29 14:27:58.745868 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-08-29 14:27:58.745882 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-08-29 14:27:58.745894 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-08-29 14:27:58.745905 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 14:27:58.745916 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-08-29 14:27:58.745927 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-08-29 14:27:58.745938 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-08-29 14:27:58.745949 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-08-29 14:27:58.745960 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-08-29 14:27:58.745970 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-08-29 14:27:58.745981 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-08-29 14:27:58.745992 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-08-29 14:27:58.746003 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s 2025-08-29 14:27:58.746013 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2025-08-29 14:27:58.746077 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-08-29 14:27:58.746088 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-08-29 14:27:59.041102 | orchestrator | + osism apply squid 2025-08-29 14:28:11.037482 | orchestrator | 2025-08-29 14:28:11 | INFO  | Task 43453bc7-4231-47ce-80a8-0f926e9f99e7 (squid) was prepared for execution. 2025-08-29 14:28:11.037607 | orchestrator | 2025-08-29 14:28:11 | INFO  | It takes a moment until task 43453bc7-4231-47ce-80a8-0f926e9f99e7 (squid) has been started and output is visible here. 2025-08-29 14:30:05.266619 | orchestrator | 2025-08-29 14:30:05.266731 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-08-29 14:30:05.266800 | orchestrator | 2025-08-29 14:30:05.266814 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-08-29 14:30:05.266823 | orchestrator | Friday 29 August 2025 14:28:15 +0000 (0:00:00.167) 0:00:00.167 ********* 2025-08-29 14:30:05.266833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-08-29 14:30:05.266842 | orchestrator | 2025-08-29 14:30:05.266851 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-08-29 14:30:05.266876 | orchestrator | Friday 29 August 2025 14:28:15 +0000 (0:00:00.089) 0:00:00.257 ********* 2025-08-29 14:30:05.266885 | orchestrator | ok: [testbed-manager] 2025-08-29 14:30:05.266921 | orchestrator | 2025-08-29 14:30:05.266936 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-08-29 14:30:05.266951 | orchestrator | Friday 29 August 2025 14:28:16 +0000 (0:00:01.493) 0:00:01.751 ********* 2025-08-29 14:30:05.266993 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration) 2025-08-29 14:30:05.267030 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-08-29 14:30:05.267045 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-08-29 14:30:05.267062 | orchestrator | 2025-08-29 14:30:05.267075 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-08-29 14:30:05.267112 | orchestrator | Friday 29 August 2025 14:28:17 +0000 (0:00:01.217) 0:00:02.968 ********* 2025-08-29 14:30:05.267127 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-08-29 14:30:05.267143 | orchestrator | 2025-08-29 14:30:05.267152 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-08-29 14:30:05.267174 | orchestrator | Friday 29 August 2025 14:28:18 +0000 (0:00:01.051) 0:00:04.020 ********* 2025-08-29 14:30:05.267184 | orchestrator | ok: [testbed-manager] 2025-08-29 14:30:05.267193 | orchestrator | 2025-08-29 14:30:05.267202 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-08-29 14:30:05.267210 | orchestrator | Friday 29 August 2025 14:28:19 +0000 (0:00:00.377) 0:00:04.398 ********* 2025-08-29 14:30:05.267219 | orchestrator | changed: [testbed-manager] 2025-08-29 14:30:05.267228 | orchestrator | 2025-08-29 14:30:05.267236 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-08-29 14:30:05.267259 | orchestrator | Friday 29 August 2025 14:28:20 +0000 (0:00:00.921) 0:00:05.319 ********* 2025-08-29 14:30:05.267268 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
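The `FAILED - RETRYING: ... (10 retries left)` line above comes from Ansible's `until`/`retries` mechanism: the task polls until the squid service reports a usable state, up to the configured retry count. A rough shell equivalent of that retry loop; `probe` here is a stand-in for the role's actual health check, and the delay between attempts is omitted:

```shell
# Stand-in retry loop mirroring Ansible's retries/until behaviour.
# probe() is a placeholder health check; it succeeds on the third
# attempt so the retry path is actually exercised.
attempts=0
probe() { [ "$attempts" -ge 3 ]; }
until probe; do
    attempts=$((attempts + 1))
    [ "$attempts" -gt 10 ] && { echo "giving up"; exit 1; }
    echo "FAILED - RETRYING ($((10 - attempts)) retries left)"
done
echo "ok after $attempts retries"
```

In the real task Ansible also sleeps between attempts (its `delay` keyword), which is why "Manage squid service" accounts for ~32 s in the recap despite succeeding on an early retry.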
2025-08-29 14:30:05.267277 | orchestrator | ok: [testbed-manager] 2025-08-29 14:30:05.267286 | orchestrator | 2025-08-29 14:30:05.267295 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-08-29 14:30:05.267303 | orchestrator | Friday 29 August 2025 14:28:52 +0000 (0:00:31.963) 0:00:37.283 ********* 2025-08-29 14:30:05.267312 | orchestrator | changed: [testbed-manager] 2025-08-29 14:30:05.267321 | orchestrator | 2025-08-29 14:30:05.267330 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-08-29 14:30:05.267338 | orchestrator | Friday 29 August 2025 14:29:04 +0000 (0:00:12.079) 0:00:49.363 ********* 2025-08-29 14:30:05.267347 | orchestrator | Pausing for 60 seconds 2025-08-29 14:30:05.267356 | orchestrator | changed: [testbed-manager] 2025-08-29 14:30:05.267365 | orchestrator | 2025-08-29 14:30:05.267376 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-08-29 14:30:05.267392 | orchestrator | Friday 29 August 2025 14:30:04 +0000 (0:01:00.087) 0:01:49.451 ********* 2025-08-29 14:30:05.267453 | orchestrator | ok: [testbed-manager] 2025-08-29 14:30:05.267468 | orchestrator | 2025-08-29 14:30:05.267495 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-08-29 14:30:05.267504 | orchestrator | Friday 29 August 2025 14:30:04 +0000 (0:00:00.058) 0:01:49.510 ********* 2025-08-29 14:30:05.267513 | orchestrator | changed: [testbed-manager] 2025-08-29 14:30:05.267522 | orchestrator | 2025-08-29 14:30:05.267530 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:30:05.267539 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:30:05.267548 | orchestrator | 2025-08-29 14:30:05.267557 | orchestrator | 2025-08-29 14:30:05.267566 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-08-29 14:30:05.267574 | orchestrator | Friday 29 August 2025 14:30:04 +0000 (0:00:00.639) 0:01:50.149 ********* 2025-08-29 14:30:05.267583 | orchestrator | =============================================================================== 2025-08-29 14:30:05.267592 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-08-29 14:30:05.267601 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.96s 2025-08-29 14:30:05.267609 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.08s 2025-08-29 14:30:05.267618 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.49s 2025-08-29 14:30:05.267636 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2025-08-29 14:30:05.267645 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2025-08-29 14:30:05.267654 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s 2025-08-29 14:30:05.267662 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2025-08-29 14:30:05.267671 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-08-29 14:30:05.267680 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-08-29 14:30:05.267688 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-08-29 14:30:05.587410 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-08-29 14:30:05.588337 | orchestrator | ++ semver latest 9.0.0 2025-08-29 14:30:05.644973 | orchestrator | + [[ -1 -lt 0 ]] 2025-08-29 14:30:05.645071 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-08-29 14:30:05.646217 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-08-29 14:30:17.784259 | orchestrator | 2025-08-29 14:30:17 | INFO  | Task 92254016-c9f7-496c-bf2f-03956339a5c3 (operator) was prepared for execution. 2025-08-29 14:30:17.784379 | orchestrator | 2025-08-29 14:30:17 | INFO  | It takes a moment until task 92254016-c9f7-496c-bf2f-03956339a5c3 (operator) has been started and output is visible here. 2025-08-29 14:30:33.716764 | orchestrator | 2025-08-29 14:30:33.716901 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-08-29 14:30:33.716920 | orchestrator | 2025-08-29 14:30:33.716935 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-08-29 14:30:33.716973 | orchestrator | Friday 29 August 2025 14:30:21 +0000 (0:00:00.199) 0:00:00.199 ********* 2025-08-29 14:30:33.716988 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:33.717005 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:30:33.717018 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:33.717031 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:33.717045 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:30:33.717058 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:30:33.717072 | orchestrator | 2025-08-29 14:30:33.717087 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-08-29 14:30:33.717096 | orchestrator | Friday 29 August 2025 14:30:25 +0000 (0:00:03.627) 0:00:03.827 ********* 2025-08-29 14:30:33.717104 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:33.717112 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:30:33.717120 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:30:33.717128 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:33.717136 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:30:33.717146 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:33.717160 | orchestrator | 2025-08-29 
14:30:33.717174 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-08-29 14:30:33.717188 | orchestrator | 2025-08-29 14:30:33.717203 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-08-29 14:30:33.717217 | orchestrator | Friday 29 August 2025 14:30:26 +0000 (0:00:00.693) 0:00:04.520 ********* 2025-08-29 14:30:33.717230 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:30:33.717244 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:30:33.717259 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:30:33.717273 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:33.717286 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:33.717301 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:33.717316 | orchestrator | 2025-08-29 14:30:33.717330 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-08-29 14:30:33.717345 | orchestrator | Friday 29 August 2025 14:30:26 +0000 (0:00:00.172) 0:00:04.692 ********* 2025-08-29 14:30:33.717360 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:30:33.717371 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:30:33.717381 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:30:33.717389 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:30:33.717417 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:30:33.717427 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:30:33.717435 | orchestrator | 2025-08-29 14:30:33.717444 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-08-29 14:30:33.717453 | orchestrator | Friday 29 August 2025 14:30:26 +0000 (0:00:00.217) 0:00:04.910 ********* 2025-08-29 14:30:33.717461 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:33.717469 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:33.717477 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:30:33.717485 | 
orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:33.717493 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:30:33.717501 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:30:33.717508 | orchestrator | 2025-08-29 14:30:33.717516 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-08-29 14:30:33.717525 | orchestrator | Friday 29 August 2025 14:30:27 +0000 (0:00:00.613) 0:00:05.524 ********* 2025-08-29 14:30:33.717533 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:30:33.717541 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:30:33.717548 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:30:33.717556 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:33.717564 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:33.717571 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:33.717579 | orchestrator | 2025-08-29 14:30:33.717587 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-08-29 14:30:33.717594 | orchestrator | Friday 29 August 2025 14:30:27 +0000 (0:00:00.758) 0:00:06.282 ********* 2025-08-29 14:30:33.717602 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-08-29 14:30:33.717610 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-08-29 14:30:33.717618 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-08-29 14:30:33.717626 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-08-29 14:30:33.717633 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-08-29 14:30:33.717641 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-08-29 14:30:33.717649 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-08-29 14:30:33.717656 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-08-29 14:30:33.717664 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-08-29 14:30:33.717672 | orchestrator | changed: 
[testbed-node-2] => (item=sudo) 2025-08-29 14:30:33.717679 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-08-29 14:30:33.717687 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-08-29 14:30:33.717695 | orchestrator | 2025-08-29 14:30:33.717703 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-08-29 14:30:33.717710 | orchestrator | Friday 29 August 2025 14:30:28 +0000 (0:00:01.158) 0:00:07.440 ********* 2025-08-29 14:30:33.717718 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:30:33.717726 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:30:33.717755 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:30:33.717763 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:30:33.717771 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:30:33.717779 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:30:33.717786 | orchestrator | 2025-08-29 14:30:33.717794 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-08-29 14:30:33.717804 | orchestrator | Friday 29 August 2025 14:30:30 +0000 (0:00:01.246) 0:00:08.687 ********* 2025-08-29 14:30:33.717811 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-08-29 14:30:33.717819 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
2025-08-29 14:30:33.717827 | orchestrator | To avoid this, create the remote_tmp dir with the correct permissions manually
2025-08-29 14:30:33.717835 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 14:30:33.717858 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 14:30:33.717873 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 14:30:33.717881 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 14:30:33.717889 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 14:30:33.717897 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-08-29 14:30:33.717904 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-08-29 14:30:33.717912 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-08-29 14:30:33.717920 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-08-29 14:30:33.717928 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-08-29 14:30:33.717935 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-08-29 14:30:33.717943 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-08-29 14:30:33.717951 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-08-29 14:30:33.717958 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-08-29 14:30:33.717966 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-08-29 14:30:33.717974 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-08-29 14:30:33.717982 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-08-29 14:30:33.717989 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-08-29 14:30:33.717997 | orchestrator |
2025-08-29 14:30:33.718005 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-08-29 14:30:33.718014 | orchestrator | Friday 29 August 2025 14:30:31 +0000 (0:00:01.259) 0:00:09.946 *********
2025-08-29 14:30:33.718076 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:30:33.718084 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:30:33.718092 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:30:33.718100 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:30:33.718110 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:30:33.718123 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:30:33.718138 | orchestrator |
2025-08-29 14:30:33.718151 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-08-29 14:30:33.718162 | orchestrator | Friday 29 August 2025 14:30:31 +0000 (0:00:00.183) 0:00:10.129 *********
2025-08-29 14:30:33.718170 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:30:33.718178 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:30:33.718186 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:30:33.718193 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:30:33.718201 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:30:33.718209 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:30:33.718217 | orchestrator |
2025-08-29 14:30:33.718225 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-08-29 14:30:33.718232 | orchestrator | Friday 29 August 2025 14:30:32 +0000 (0:00:00.573) 0:00:10.703 *********
2025-08-29 14:30:33.718240 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:30:33.718248 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:30:33.718259 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:30:33.718273 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:30:33.718287 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:30:33.718302 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:30:33.718318 | orchestrator |
2025-08-29 14:30:33.718333 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-08-29 14:30:33.718348 | orchestrator | Friday 29 August 2025 14:30:32 +0000 (0:00:00.194) 0:00:10.897 *********
2025-08-29 14:30:33.718360 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-08-29 14:30:33.718371 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:30:33.718384 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-08-29 14:30:33.718397 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-08-29 14:30:33.718424 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:30:33.718438 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:30:33.718451 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-08-29 14:30:33.718465 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-08-29 14:30:33.718479 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:30:33.718492 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:30:33.718506 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-08-29 14:30:33.718519 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:30:33.718532 | orchestrator |
2025-08-29 14:30:33.718546 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-08-29 14:30:33.718560 | orchestrator | Friday 29 August 2025 14:30:33 +0000 (0:00:00.692) 0:00:11.590 *********
2025-08-29 14:30:33.718574 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:30:33.718587 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:30:33.718598 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:30:33.718606 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:30:33.718614 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:30:33.718621 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:30:33.718629 | orchestrator |
2025-08-29 14:30:33.718644 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-08-29 14:30:33.718653 | orchestrator | Friday 29 August 2025 14:30:33 +0000 (0:00:00.173) 0:00:11.763 *********
2025-08-29 14:30:33.718661 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:30:33.718668 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:30:33.718676 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:30:33.718684 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:30:33.718692 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:30:33.718699 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:30:33.718711 | orchestrator |
2025-08-29 14:30:33.718721 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-08-29 14:30:33.718763 | orchestrator | Friday 29 August 2025 14:30:33 +0000 (0:00:00.178) 0:00:11.966 *********
2025-08-29 14:30:33.718778 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:30:33.718793 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:30:33.718806 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:30:33.718819 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:30:33.718843 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:30:34.957395 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:30:34.957503 | orchestrator |
2025-08-29 14:30:34.957536 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-08-29 14:30:34.957551 | orchestrator | Friday 29 August 2025 14:30:33 +0000 (0:00:00.178) 0:00:12.145 *********
2025-08-29 14:30:34.957562 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:30:34.957573 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:30:34.957584 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:30:34.957594 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:30:34.957605 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:30:34.957616 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:30:34.957627 | orchestrator |
2025-08-29 14:30:34.957637 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-08-29 14:30:34.957648 | orchestrator | Friday 29 August 2025 14:30:34 +0000 (0:00:00.689) 0:00:12.835 *********
2025-08-29 14:30:34.957659 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:30:34.957670 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:30:34.957681 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:30:34.957691 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:30:34.957702 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:30:34.957712 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:30:34.957723 | orchestrator |
2025-08-29 14:30:34.957763 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:30:34.957776 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:30:34.957810 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:30:34.957822 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:30:34.957833 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:30:34.957844 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:30:34.957855 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 14:30:34.957865 | orchestrator |
2025-08-29 14:30:34.957876 | orchestrator |
2025-08-29 14:30:34.957887 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:30:34.957898 | orchestrator | Friday 29 August 2025 14:30:34 +0000 (0:00:00.261) 0:00:13.096 *********
2025-08-29 14:30:34.957908 | orchestrator | ===============================================================================
2025-08-29 14:30:34.957919 | orchestrator | Gathering Facts --------------------------------------------------------- 3.63s
2025-08-29 14:30:34.957932 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2025-08-29 14:30:34.957945 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s
2025-08-29 14:30:34.957956 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s
2025-08-29 14:30:34.957968 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s
2025-08-29 14:30:34.957980 | orchestrator | Do not require tty for all users ---------------------------------------- 0.69s
2025-08-29 14:30:34.957991 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2025-08-29 14:30:34.958003 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s
2025-08-29 14:30:34.958015 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2025-08-29 14:30:34.958117 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s
2025-08-29 14:30:34.958130 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s
2025-08-29 14:30:34.958141 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.22s
2025-08-29 14:30:34.958152 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s
2025-08-29 14:30:34.958163 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-08-29 14:30:34.958173 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2025-08-29 14:30:34.958184 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-08-29 14:30:34.958195 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2025-08-29 14:30:34.958206 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2025-08-29 14:30:35.268801 | orchestrator | + osism apply --environment custom facts
2025-08-29 14:30:37.327363 | orchestrator | 2025-08-29 14:30:37 | INFO  | Trying to run play facts in environment custom
2025-08-29 14:30:47.421908 | orchestrator | 2025-08-29 14:30:47 | INFO  | Task 042e97e1-72c1-45d0-8043-141048fc73f9 (facts) was prepared for execution.
2025-08-29 14:30:47.422083 | orchestrator | 2025-08-29 14:30:47 | INFO  | It takes a moment until task 042e97e1-72c1-45d0-8043-141048fc73f9 (facts) has been started and output is visible here.
2025-08-29 14:31:36.667839 | orchestrator |
2025-08-29 14:31:36.667951 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-08-29 14:31:36.668003 | orchestrator |
2025-08-29 14:31:36.668016 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 14:31:36.668028 | orchestrator | Friday 29 August 2025 14:30:51 +0000 (0:00:00.103) 0:00:00.103 *********
2025-08-29 14:31:36.668038 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:36.668050 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:36.668062 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:36.668072 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:36.668083 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:36.668093 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:36.668104 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:36.668115 | orchestrator |
2025-08-29 14:31:36.668125 | orchestrator | TASK [Copy fact file] **********************************************************
2025-08-29 14:31:36.668136 | orchestrator | Friday 29 August 2025 14:30:52 +0000 (0:00:01.395) 0:00:01.499 *********
2025-08-29 14:31:36.668147 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:36.668157 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:36.668168 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:31:36.668179 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:36.668189 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:36.668200 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:31:36.668210 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:31:36.668221 | orchestrator |
2025-08-29 14:31:36.668231 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-08-29 14:31:36.668242 | orchestrator |
2025-08-29 14:31:36.668253 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-08-29 14:31:36.668263 | orchestrator | Friday 29 August 2025 14:30:53 +0000 (0:00:01.194) 0:00:02.693 *********
2025-08-29 14:31:36.668274 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:36.668285 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:36.668295 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:36.668308 | orchestrator |
2025-08-29 14:31:36.668320 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-08-29 14:31:36.668332 | orchestrator | Friday 29 August 2025 14:30:54 +0000 (0:00:00.157) 0:00:02.851 *********
2025-08-29 14:31:36.668344 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:36.668356 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:36.668368 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:36.668380 | orchestrator |
2025-08-29 14:31:36.668392 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-08-29 14:31:36.668404 | orchestrator | Friday 29 August 2025 14:30:54 +0000 (0:00:00.210) 0:00:03.061 *********
2025-08-29 14:31:36.668415 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:36.668427 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:36.668439 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:36.668451 | orchestrator |
2025-08-29 14:31:36.668463 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-08-29 14:31:36.668476 | orchestrator | Friday 29 August 2025 14:30:54 +0000 (0:00:00.228) 0:00:03.289 *********
2025-08-29 14:31:36.668489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:31:36.668502 | orchestrator |
2025-08-29 14:31:36.668515 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-08-29 14:31:36.668527 | orchestrator | Friday 29 August 2025 14:30:54 +0000 (0:00:00.169) 0:00:03.458 *********
2025-08-29 14:31:36.668539 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:36.668551 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:36.668563 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:36.668574 | orchestrator |
2025-08-29 14:31:36.668587 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-08-29 14:31:36.668599 | orchestrator | Friday 29 August 2025 14:30:55 +0000 (0:00:00.440) 0:00:03.899 *********
2025-08-29 14:31:36.668611 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:36.668631 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:36.668644 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:36.668656 | orchestrator |
2025-08-29 14:31:36.668668 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-08-29 14:31:36.668680 | orchestrator | Friday 29 August 2025 14:30:55 +0000 (0:00:00.128) 0:00:04.028 *********
2025-08-29 14:31:36.668690 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:36.668723 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:36.668742 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:36.668759 | orchestrator |
2025-08-29 14:31:36.668779 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-08-29 14:31:36.668795 | orchestrator | Friday 29 August 2025 14:30:56 +0000 (0:00:01.045) 0:00:05.073 *********
2025-08-29 14:31:36.668810 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:36.668825 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:36.668840 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:36.668857 | orchestrator |
2025-08-29 14:31:36.668875 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-08-29 14:31:36.668892 | orchestrator | Friday 29 August 2025 14:30:56 +0000 (0:00:00.460) 0:00:05.534 *********
2025-08-29 14:31:36.668910 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:36.668929 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:36.668945 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:36.668956 | orchestrator |
2025-08-29 14:31:36.668985 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-08-29 14:31:36.668997 | orchestrator | Friday 29 August 2025 14:30:57 +0000 (0:00:01.068) 0:00:06.602 *********
2025-08-29 14:31:36.669008 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:36.669019 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:36.669029 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:36.669040 | orchestrator |
2025-08-29 14:31:36.669051 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-08-29 14:31:36.669062 | orchestrator | Friday 29 August 2025 14:31:15 +0000 (0:00:17.198) 0:00:23.801 *********
2025-08-29 14:31:36.669072 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:31:36.669083 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:31:36.669094 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:31:36.669104 | orchestrator |
2025-08-29 14:31:36.669115 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-08-29 14:31:36.669143 | orchestrator | Friday 29 August 2025 14:31:15 +0000 (0:00:00.112) 0:00:23.913 *********
2025-08-29 14:31:36.669160 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:31:36.669171 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:31:36.669181 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:31:36.669192 | orchestrator |
2025-08-29 14:31:36.669203 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-08-29 14:31:36.669214 | orchestrator | Friday 29 August 2025 14:31:22 +0000 (0:00:07.423) 0:00:31.337 *********
2025-08-29 14:31:36.669224 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:36.669235 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:36.669246 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:36.669257 | orchestrator |
2025-08-29 14:31:36.669268 | orchestrator | TASK [Copy fact files] *********************************************************
2025-08-29 14:31:36.669279 | orchestrator | Friday 29 August 2025 14:31:23 +0000 (0:00:00.418) 0:00:31.755 *********
2025-08-29 14:31:36.669289 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-08-29 14:31:36.669300 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-08-29 14:31:36.669311 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-08-29 14:31:36.669322 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-08-29 14:31:36.669332 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-08-29 14:31:36.669343 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-08-29 14:31:36.669353 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-08-29 14:31:36.669371 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-08-29 14:31:36.669382 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-08-29 14:31:36.669392 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-08-29 14:31:36.669403 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-08-29 14:31:36.669413 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-08-29 14:31:36.669424 | orchestrator |
2025-08-29 14:31:36.669435 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 14:31:36.669445 | orchestrator | Friday 29 August 2025 14:31:31 +0000 (0:00:08.551) 0:00:40.307 *********
2025-08-29 14:31:36.669456 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:36.669467 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:36.669477 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:36.669488 | orchestrator |
2025-08-29 14:31:36.669499 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 14:31:36.669509 | orchestrator |
2025-08-29 14:31:36.669520 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:31:36.669531 | orchestrator | Friday 29 August 2025 14:31:32 +0000 (0:00:01.185) 0:00:41.492 *********
2025-08-29 14:31:36.669541 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:31:36.669552 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:31:36.669563 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:31:36.669573 | orchestrator | ok: [testbed-manager]
2025-08-29 14:31:36.669584 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:31:36.669595 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:31:36.669605 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:31:36.669615 | orchestrator |
2025-08-29 14:31:36.669626 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:31:36.669638 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:31:36.669649 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:31:36.669661 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:31:36.669672 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:31:36.669683 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:31:36.669694 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:31:36.669728 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:31:36.669739 | orchestrator |
2025-08-29 14:31:36.669749 | orchestrator |
2025-08-29 14:31:36.669760 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:31:36.669772 | orchestrator | Friday 29 August 2025 14:31:36 +0000 (0:00:03.858) 0:00:45.351 *********
2025-08-29 14:31:36.669782 | orchestrator | ===============================================================================
2025-08-29 14:31:36.669793 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.20s
2025-08-29 14:31:36.669804 | orchestrator | Copy fact files --------------------------------------------------------- 8.55s
2025-08-29 14:31:36.669815 | orchestrator | Install required packages (Debian) -------------------------------------- 7.42s
2025-08-29 14:31:36.669826 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.86s
2025-08-29 14:31:36.669843 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2025-08-29 14:31:36.669854 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s
2025-08-29 14:31:36.669871 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.19s
2025-08-29 14:31:36.936583 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2025-08-29 14:31:36.936691 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2025-08-29 14:31:36.936749 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-08-29 14:31:36.936761 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2025-08-29 14:31:36.936772 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-08-29 14:31:36.936783 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2025-08-29 14:31:36.936795 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2025-08-29 14:31:36.936806 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2025-08-29 14:31:36.936817 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.16s
2025-08-29 14:31:36.936828 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2025-08-29 14:31:36.936839 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-08-29 14:31:37.278356 | orchestrator | + osism apply bootstrap
2025-08-29 14:31:49.441686 | orchestrator | 2025-08-29 14:31:49 | INFO  | Task 03614379-4068-4fe5-89d0-220cdbd393ee (bootstrap) was prepared for execution.
2025-08-29 14:31:49.441827 | orchestrator | 2025-08-29 14:31:49 | INFO  | It takes a moment until task 03614379-4068-4fe5-89d0-220cdbd393ee (bootstrap) has been started and output is visible here.
2025-08-29 14:32:05.098291 | orchestrator |
2025-08-29 14:32:05.098416 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-08-29 14:32:05.098434 | orchestrator |
2025-08-29 14:32:05.098446 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-08-29 14:32:05.098459 | orchestrator | Friday 29 August 2025 14:31:53 +0000 (0:00:00.169) 0:00:00.169 *********
2025-08-29 14:32:05.098470 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.098482 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.098493 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.098504 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.098515 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.098526 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.098537 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.098548 | orchestrator |
2025-08-29 14:32:05.098560 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-08-29 14:32:05.098571 | orchestrator |
2025-08-29 14:32:05.098582 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:32:05.098593 | orchestrator | Friday 29 August 2025 14:31:53 +0000 (0:00:00.256) 0:00:00.426 *********
2025-08-29 14:32:05.098605 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.098616 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.098627 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.098638 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.098649 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.098660 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.098670 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.098682 | orchestrator |
2025-08-29 14:32:05.098721 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-08-29 14:32:05.098732 | orchestrator |
2025-08-29 14:32:05.098743 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-08-29 14:32:05.098755 | orchestrator | Friday 29 August 2025 14:31:57 +0000 (0:00:03.637) 0:00:04.063 *********
2025-08-29 14:32:05.098766 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-08-29 14:32:05.098803 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-08-29 14:32:05.098816 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-08-29 14:32:05.098830 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-08-29 14:32:05.098844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 14:32:05.098857 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-08-29 14:32:05.098869 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-08-29 14:32:05.098882 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-08-29 14:32:05.098895 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-08-29 14:32:05.098907 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-08-29 14:32:05.098919 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 14:32:05.098932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-08-29 14:32:05.098945 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-08-29 14:32:05.098958 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-08-29 14:32:05.098971 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-08-29 14:32:05.098985 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-08-29 14:32:05.098997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 14:32:05.099010 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 14:32:05.099023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 14:32:05.099036 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-08-29 14:32:05.099048 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-08-29 14:32:05.099061 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:32:05.099074 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-08-29 14:32:05.099087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-08-29 14:32:05.099100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 14:32:05.099112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-08-29 14:32:05.099125 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-08-29 14:32:05.099138 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:32:05.099150 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-08-29 14:32:05.099163 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-08-29 14:32:05.099175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-08-29 14:32:05.099186 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:32:05.099197 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-08-29 14:32:05.099225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 14:32:05.099236 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-08-29 14:32:05.099247 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-08-29 14:32:05.099258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 14:32:05.099269 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-08-29 14:32:05.099280 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-08-29 14:32:05.099290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 14:32:05.099301 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:32:05.099312 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-08-29 14:32:05.099323 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-08-29 14:32:05.099333 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-08-29 14:32:05.099344 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-08-29 14:32:05.099355 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-08-29 14:32:05.099394 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-08-29 14:32:05.099406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-08-29 14:32:05.099417 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:32:05.099428 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-08-29 14:32:05.099439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-08-29 14:32:05.099449 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-08-29 14:32:05.099460 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:32:05.099471 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-08-29 14:32:05.099482 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-08-29 14:32:05.099493 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:32:05.099504 | orchestrator |
2025-08-29 14:32:05.099514 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-08-29 14:32:05.099525 | orchestrator |
2025-08-29 14:32:05.099536 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-08-29 14:32:05.099547 | orchestrator | Friday 29 August 2025 14:31:57 +0000 (0:00:00.458) 0:00:04.522 *********
2025-08-29 14:32:05.099558 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.099569 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.099580 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.099590 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.099601 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.099612 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.099623 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.099633 | orchestrator |
2025-08-29 14:32:05.099644 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-08-29 14:32:05.099655 | orchestrator | Friday 29 August 2025 14:31:59 +0000 (0:00:01.219) 0:00:05.741 *********
2025-08-29 14:32:05.099666 | orchestrator | ok: [testbed-manager]
2025-08-29 14:32:05.099677 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:32:05.099733 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:32:05.099745 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:32:05.099755 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:32:05.099766 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:32:05.099777 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:32:05.099788 | orchestrator |
2025-08-29 14:32:05.099798 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-08-29 14:32:05.099809 | orchestrator | Friday 29 August 2025 14:32:00 +0000 (0:00:00.273) 0:00:06.917 *********
2025-08-29 14:32:05.099821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:32:05.099835 | orchestrator |
2025-08-29 14:32:05.099846 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-08-29 14:32:05.099857 | orchestrator | Friday 29 August 2025 14:32:00 +0000 (0:00:00.273) 0:00:07.190 *********
2025-08-29 14:32:05.099868 | orchestrator | changed: [testbed-manager]
2025-08-29 14:32:05.099879 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:32:05.099890 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:32:05.099900 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:32:05.099911 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:32:05.099922 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:32:05.099932 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:32:05.099943 | orchestrator |
2025-08-29 14:32:05.099954 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-08-29 14:32:05.099965 | orchestrator | Friday 29 August 2025 14:32:02 +0000 (0:00:02.000) 0:00:09.191 *********
2025-08-29 14:32:05.099975 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:32:05.099987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:32:05.100007 | orchestrator |
2025-08-29 14:32:05.100019 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-08-29 14:32:05.100030 | orchestrator | Friday 29 August 2025 14:32:02 +0000 (0:00:00.267) 0:00:09.458 *********
2025-08-29 14:32:05.100041 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:32:05.100052 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:32:05.100063 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:32:05.100079 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:32:05.100090 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:32:05.100100 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:32:05.100111 | orchestrator |
2025-08-29 14:32:05.100122 | orchestrator | TASK
[osism.commons.proxy : Set system wide settings in environment file] ****** 2025-08-29 14:32:05.100133 | orchestrator | Friday 29 August 2025 14:32:03 +0000 (0:00:01.011) 0:00:10.469 ********* 2025-08-29 14:32:05.100143 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:05.100154 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:05.100165 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:05.100176 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:05.100186 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:05.100197 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:05.100207 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:05.100218 | orchestrator | 2025-08-29 14:32:05.100229 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-08-29 14:32:05.100240 | orchestrator | Friday 29 August 2025 14:32:04 +0000 (0:00:00.637) 0:00:11.107 ********* 2025-08-29 14:32:05.100251 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:05.100261 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:05.100272 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:05.100282 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:05.100293 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:05.100304 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:05.100314 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:05.100325 | orchestrator | 2025-08-29 14:32:05.100336 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-08-29 14:32:05.100348 | orchestrator | Friday 29 August 2025 14:32:04 +0000 (0:00:00.413) 0:00:11.520 ********* 2025-08-29 14:32:05.100359 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:05.100370 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:05.100388 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:17.236050 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:17.236221 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:17.236240 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:17.236251 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:17.236263 | orchestrator | 2025-08-29 14:32:17.236276 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-08-29 14:32:17.236289 | orchestrator | Friday 29 August 2025 14:32:05 +0000 (0:00:00.269) 0:00:11.790 ********* 2025-08-29 14:32:17.236303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:32:17.236336 | orchestrator | 2025-08-29 14:32:17.236348 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-08-29 14:32:17.236378 | orchestrator | Friday 29 August 2025 14:32:05 +0000 (0:00:00.301) 0:00:12.091 ********* 2025-08-29 14:32:17.236390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:32:17.236401 | orchestrator | 2025-08-29 14:32:17.236413 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-08-29 14:32:17.236423 | orchestrator | Friday 29 August 2025 14:32:05 +0000 (0:00:00.297) 0:00:12.389 ********* 2025-08-29 14:32:17.236466 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.236480 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:17.236490 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:17.236501 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:17.236512 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 14:32:17.236523 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.236534 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.236545 | orchestrator | 2025-08-29 14:32:17.236559 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-08-29 14:32:17.236571 | orchestrator | Friday 29 August 2025 14:32:07 +0000 (0:00:01.320) 0:00:13.710 ********* 2025-08-29 14:32:17.236584 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:17.236596 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:17.236613 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:17.236642 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:17.236665 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:17.236775 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:17.236798 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:17.236817 | orchestrator | 2025-08-29 14:32:17.236837 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-08-29 14:32:17.236856 | orchestrator | Friday 29 August 2025 14:32:07 +0000 (0:00:00.236) 0:00:13.947 ********* 2025-08-29 14:32:17.236875 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.236894 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:17.236912 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:17.236930 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:17.236942 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.236953 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.236964 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.236975 | orchestrator | 2025-08-29 14:32:17.236986 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-08-29 14:32:17.236997 | orchestrator | Friday 29 August 2025 14:32:07 +0000 (0:00:00.547) 0:00:14.494 ********* 2025-08-29 14:32:17.237008 | 
orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:17.237019 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:17.237030 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:17.237041 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:17.237052 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:17.237063 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:17.237074 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:17.237085 | orchestrator | 2025-08-29 14:32:17.237096 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-08-29 14:32:17.237109 | orchestrator | Friday 29 August 2025 14:32:08 +0000 (0:00:00.286) 0:00:14.781 ********* 2025-08-29 14:32:17.237120 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.237132 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:17.237143 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:17.237154 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:17.237168 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:17.237192 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:17.237216 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:17.237233 | orchestrator | 2025-08-29 14:32:17.237251 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-08-29 14:32:17.237268 | orchestrator | Friday 29 August 2025 14:32:08 +0000 (0:00:00.670) 0:00:15.451 ********* 2025-08-29 14:32:17.237286 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.237303 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:17.237321 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:17.237341 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:17.237359 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:17.237377 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:17.237396 | 
orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:17.237413 | orchestrator | 2025-08-29 14:32:17.237431 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-08-29 14:32:17.237467 | orchestrator | Friday 29 August 2025 14:32:09 +0000 (0:00:01.117) 0:00:16.569 ********* 2025-08-29 14:32:17.237485 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.237503 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:17.237522 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:17.237541 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:17.237558 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.237577 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.237591 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.237602 | orchestrator | 2025-08-29 14:32:17.237613 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-08-29 14:32:17.237625 | orchestrator | Friday 29 August 2025 14:32:11 +0000 (0:00:01.055) 0:00:17.625 ********* 2025-08-29 14:32:17.237662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:32:17.237675 | orchestrator | 2025-08-29 14:32:17.237709 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-08-29 14:32:17.237720 | orchestrator | Friday 29 August 2025 14:32:11 +0000 (0:00:00.314) 0:00:17.939 ********* 2025-08-29 14:32:17.237731 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:17.237742 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:17.237753 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:32:17.237763 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:17.237774 | orchestrator | changed: [testbed-node-1] 
2025-08-29 14:32:17.237785 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:32:17.237796 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:32:17.237806 | orchestrator | 2025-08-29 14:32:17.237817 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-08-29 14:32:17.237828 | orchestrator | Friday 29 August 2025 14:32:12 +0000 (0:00:01.285) 0:00:19.225 ********* 2025-08-29 14:32:17.237838 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.237849 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:17.237860 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:17.237870 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:17.237881 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.237892 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.237902 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.237913 | orchestrator | 2025-08-29 14:32:17.237924 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-08-29 14:32:17.237935 | orchestrator | Friday 29 August 2025 14:32:12 +0000 (0:00:00.239) 0:00:19.464 ********* 2025-08-29 14:32:17.237946 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.237956 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:17.237967 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:17.237977 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:17.237988 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.238113 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.238130 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.238141 | orchestrator | 2025-08-29 14:32:17.238152 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-08-29 14:32:17.238163 | orchestrator | Friday 29 August 2025 14:32:13 +0000 (0:00:00.241) 0:00:19.706 ********* 2025-08-29 14:32:17.238174 | orchestrator | ok: [testbed-manager] 2025-08-29 
14:32:17.238184 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:17.238195 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:17.238205 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:17.238216 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.238227 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.238237 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.238248 | orchestrator | 2025-08-29 14:32:17.238259 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-08-29 14:32:17.238270 | orchestrator | Friday 29 August 2025 14:32:13 +0000 (0:00:00.274) 0:00:19.980 ********* 2025-08-29 14:32:17.238294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:32:17.238308 | orchestrator | 2025-08-29 14:32:17.238318 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-08-29 14:32:17.238329 | orchestrator | Friday 29 August 2025 14:32:13 +0000 (0:00:00.334) 0:00:20.315 ********* 2025-08-29 14:32:17.238340 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.238351 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:17.238362 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:17.238372 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:17.238383 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.238393 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.238404 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.238415 | orchestrator | 2025-08-29 14:32:17.238425 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-08-29 14:32:17.238436 | orchestrator | Friday 29 August 2025 14:32:14 +0000 (0:00:00.576) 0:00:20.892 ********* 2025-08-29 14:32:17.238447 | 
orchestrator | skipping: [testbed-manager] 2025-08-29 14:32:17.238458 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:32:17.238469 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:32:17.238484 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:32:17.238496 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:32:17.238506 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:32:17.238517 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:32:17.238528 | orchestrator | 2025-08-29 14:32:17.238538 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-08-29 14:32:17.238549 | orchestrator | Friday 29 August 2025 14:32:14 +0000 (0:00:00.249) 0:00:21.141 ********* 2025-08-29 14:32:17.238560 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.238571 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:17.238581 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:32:17.238592 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:32:17.238602 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.238613 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.238624 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.238635 | orchestrator | 2025-08-29 14:32:17.238645 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-08-29 14:32:17.238656 | orchestrator | Friday 29 August 2025 14:32:15 +0000 (0:00:01.055) 0:00:22.197 ********* 2025-08-29 14:32:17.238667 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.238677 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:32:17.238708 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:32:17.238719 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:32:17.238729 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.238740 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.238750 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:32:17.238761 | 
orchestrator | 2025-08-29 14:32:17.238772 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-08-29 14:32:17.238783 | orchestrator | Friday 29 August 2025 14:32:16 +0000 (0:00:00.585) 0:00:22.782 ********* 2025-08-29 14:32:17.238794 | orchestrator | ok: [testbed-manager] 2025-08-29 14:32:17.238805 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:32:17.238816 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:32:17.238826 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:32:17.238847 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:00.921164 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:00.921302 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:33:00.921331 | orchestrator | 2025-08-29 14:33:00.921345 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-08-29 14:33:00.921358 | orchestrator | Friday 29 August 2025 14:32:17 +0000 (0:00:01.029) 0:00:23.812 ********* 2025-08-29 14:33:00.921369 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:33:00.921404 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:33:00.921416 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:00.921427 | orchestrator | changed: [testbed-manager] 2025-08-29 14:33:00.921438 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:33:00.921448 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:00.921459 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:33:00.921470 | orchestrator | 2025-08-29 14:33:00.921481 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-08-29 14:33:00.921492 | orchestrator | Friday 29 August 2025 14:32:35 +0000 (0:00:18.194) 0:00:42.007 ********* 2025-08-29 14:33:00.921503 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:00.921514 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:33:00.921524 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:33:00.921535 
| orchestrator | ok: [testbed-node-2] 2025-08-29 14:33:00.921546 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:33:00.921556 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:33:00.921567 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:00.921577 | orchestrator | 2025-08-29 14:33:00.921588 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-08-29 14:33:00.921599 | orchestrator | Friday 29 August 2025 14:32:35 +0000 (0:00:00.240) 0:00:42.247 ********* 2025-08-29 14:33:00.921610 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:00.921620 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:33:00.921631 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:33:00.921641 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:33:00.921652 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:33:00.921722 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:33:00.921735 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:00.921748 | orchestrator | 2025-08-29 14:33:00.921760 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-08-29 14:33:00.921773 | orchestrator | Friday 29 August 2025 14:32:35 +0000 (0:00:00.248) 0:00:42.495 ********* 2025-08-29 14:33:00.921787 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:00.921806 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:33:00.921825 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:33:00.921844 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:33:00.921861 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:33:00.921880 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:33:00.921898 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:00.921916 | orchestrator | 2025-08-29 14:33:00.921935 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-08-29 14:33:00.921954 | orchestrator | Friday 29 August 2025 14:32:36 +0000 (0:00:00.281) 0:00:42.777 
********* 2025-08-29 14:33:00.921974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:33:00.921997 | orchestrator | 2025-08-29 14:33:00.922083 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-08-29 14:33:00.922100 | orchestrator | Friday 29 August 2025 14:32:36 +0000 (0:00:00.332) 0:00:43.109 ********* 2025-08-29 14:33:00.922111 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:33:00.922122 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:00.922133 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:33:00.922143 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:33:00.922154 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:33:00.922165 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:33:00.922176 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:00.922186 | orchestrator | 2025-08-29 14:33:00.922197 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-08-29 14:33:00.922208 | orchestrator | Friday 29 August 2025 14:32:38 +0000 (0:00:01.670) 0:00:44.779 ********* 2025-08-29 14:33:00.922219 | orchestrator | changed: [testbed-manager] 2025-08-29 14:33:00.922230 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:33:00.922241 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:00.922266 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:33:00.922277 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:33:00.922288 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:33:00.922312 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:33:00.922324 | orchestrator | 2025-08-29 14:33:00.922335 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-08-29 
14:33:00.922346 | orchestrator | Friday 29 August 2025 14:32:39 +0000 (0:00:01.186) 0:00:45.965 ********* 2025-08-29 14:33:00.922357 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:00.922368 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:33:00.922378 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:33:00.922390 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:33:00.922400 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:33:00.922411 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:33:00.922422 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:00.922433 | orchestrator | 2025-08-29 14:33:00.922444 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-08-29 14:33:00.922454 | orchestrator | Friday 29 August 2025 14:32:40 +0000 (0:00:00.933) 0:00:46.899 ********* 2025-08-29 14:33:00.922467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:33:00.922480 | orchestrator | 2025-08-29 14:33:00.922491 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-08-29 14:33:00.922503 | orchestrator | Friday 29 August 2025 14:32:40 +0000 (0:00:00.347) 0:00:47.247 ********* 2025-08-29 14:33:00.922514 | orchestrator | changed: [testbed-manager] 2025-08-29 14:33:00.922525 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:33:00.922535 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:00.922546 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:33:00.922558 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:33:00.922568 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:33:00.922579 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:33:00.922590 | orchestrator | 2025-08-29 14:33:00.922621 | orchestrator | TASK 
[osism.services.rsyslog : Include additional log server tasks] ************ 2025-08-29 14:33:00.922632 | orchestrator | Friday 29 August 2025 14:32:41 +0000 (0:00:01.108) 0:00:48.355 ********* 2025-08-29 14:33:00.922643 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:33:00.922654 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:33:00.922705 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:33:00.922716 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:33:00.922727 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:33:00.922737 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:33:00.922748 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:33:00.922759 | orchestrator | 2025-08-29 14:33:00.922769 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-08-29 14:33:00.922780 | orchestrator | Friday 29 August 2025 14:32:42 +0000 (0:00:00.364) 0:00:48.720 ********* 2025-08-29 14:33:00.922791 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:33:00.922802 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:33:00.922812 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:33:00.922823 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:33:00.922834 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:33:00.922844 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:33:00.922855 | orchestrator | changed: [testbed-manager] 2025-08-29 14:33:00.922865 | orchestrator | 2025-08-29 14:33:00.922876 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-08-29 14:33:00.922887 | orchestrator | Friday 29 August 2025 14:32:55 +0000 (0:00:12.998) 0:01:01.719 ********* 2025-08-29 14:33:00.922898 | orchestrator | ok: [testbed-manager] 2025-08-29 14:33:00.922908 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:33:00.922919 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:33:00.922930 | orchestrator | ok: 
[testbed-node-1]
2025-08-29 14:33:00.922948 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:00.922959 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:00.922970 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:00.922980 | orchestrator |
2025-08-29 14:33:00.922991 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-08-29 14:33:00.923002 | orchestrator | Friday 29 August 2025 14:32:56 +0000 (0:00:01.539) 0:01:03.258 *********
2025-08-29 14:33:00.923013 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:00.923024 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:00.923034 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:00.923045 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:00.923056 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:00.923066 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:00.923077 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:00.923087 | orchestrator |
2025-08-29 14:33:00.923098 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-08-29 14:33:00.923109 | orchestrator | Friday 29 August 2025 14:32:57 +0000 (0:00:00.927) 0:01:04.186 *********
2025-08-29 14:33:00.923120 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:00.923131 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:00.923141 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:00.923152 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:00.923162 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:00.923173 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:00.923184 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:00.923194 | orchestrator |
2025-08-29 14:33:00.923205 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-08-29 14:33:00.923216 | orchestrator | Friday 29 August 2025 14:32:57 +0000 (0:00:00.253) 0:01:04.439 *********
2025-08-29 14:33:00.923227 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:00.923238 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:00.923248 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:00.923259 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:00.923269 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:00.923280 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:00.923291 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:00.923301 | orchestrator |
2025-08-29 14:33:00.923312 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-08-29 14:33:00.923323 | orchestrator | Friday 29 August 2025 14:32:58 +0000 (0:00:00.311) 0:01:04.678 *********
2025-08-29 14:33:00.923334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:33:00.923345 | orchestrator |
2025-08-29 14:33:00.923356 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-08-29 14:33:00.923367 | orchestrator | Friday 29 August 2025 14:32:58 +0000 (0:00:00.311) 0:01:04.989 *********
2025-08-29 14:33:00.923378 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:00.923389 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:00.923400 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:00.923410 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:00.923421 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:00.923431 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:00.923442 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:00.923453 | orchestrator |
2025-08-29 14:33:00.923464 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-08-29 14:33:00.923475 | orchestrator | Friday 29 August 2025 14:33:00 +0000 (0:00:01.685) 0:01:06.674 *********
2025-08-29 14:33:00.923485 | orchestrator | changed: [testbed-manager]
2025-08-29 14:33:00.923496 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:33:00.923507 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:33:00.923518 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:33:00.923528 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:33:00.923539 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:33:00.923557 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:33:00.923568 | orchestrator |
2025-08-29 14:33:00.923579 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-08-29 14:33:00.923589 | orchestrator | Friday 29 August 2025 14:33:00 +0000 (0:00:00.563) 0:01:07.238 *********
2025-08-29 14:33:00.923600 | orchestrator | ok: [testbed-manager]
2025-08-29 14:33:00.923611 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:33:00.923622 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:33:00.923632 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:33:00.923643 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:33:00.923653 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:33:00.923680 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:33:00.923690 | orchestrator |
2025-08-29 14:33:00.923709 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-08-29 14:35:20.889390 | orchestrator | Friday 29 August 2025 14:33:00 +0000 (0:00:00.259) 0:01:07.497 *********
2025-08-29 14:35:20.889503 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:20.889522 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:20.889533 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:20.889545 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:20.889556 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:20.889567 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:20.889577 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:20.889588 | orchestrator |
2025-08-29 14:35:20.889672 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-08-29 14:35:20.889684 | orchestrator | Friday 29 August 2025 14:33:02 +0000 (0:00:01.323) 0:01:08.820 *********
2025-08-29 14:35:20.889695 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:20.889707 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:20.889718 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:20.889729 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:20.889739 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:20.889750 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:20.889781 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:20.889793 | orchestrator |
2025-08-29 14:35:20.889805 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-08-29 14:35:20.889816 | orchestrator | Friday 29 August 2025 14:33:04 +0000 (0:00:01.805) 0:01:10.626 *********
2025-08-29 14:35:20.889827 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:20.889838 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:20.889849 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:20.889860 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:20.889871 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:20.889882 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:20.889892 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:20.889903 | orchestrator |
2025-08-29 14:35:20.889915 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-08-29 14:35:20.889934 | orchestrator | Friday 29 August 2025 14:33:06 +0000 (0:00:02.567) 0:01:13.193 *********
2025-08-29 14:35:20.889954 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:20.889972 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:20.889990 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:20.890008 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:20.890095 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:20.890108 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:20.890120 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:20.890132 | orchestrator |
2025-08-29 14:35:20.890144 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-08-29 14:35:20.890157 | orchestrator | Friday 29 August 2025 14:33:44 +0000 (0:00:38.244) 0:01:51.438 *********
2025-08-29 14:35:20.890169 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:20.890181 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:20.890193 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:20.890205 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:20.890217 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:20.890229 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:20.890265 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:20.890277 | orchestrator |
2025-08-29 14:35:20.890289 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-08-29 14:35:20.890301 | orchestrator | Friday 29 August 2025 14:34:59 +0000 (0:01:14.924) 0:03:06.362 *********
2025-08-29 14:35:20.890313 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:20.890324 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:20.890334 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:20.890346 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:20.890356 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:20.890367 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:20.890377 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:20.890388 | orchestrator |
2025-08-29 14:35:20.890399 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-08-29 14:35:20.890410 | orchestrator | Friday 29 August 2025 14:35:02 +0000 (0:00:02.436) 0:03:08.798 *********
2025-08-29 14:35:20.890421 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:20.890431 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:20.890442 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:20.890452 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:20.890463 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:20.890473 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:20.890484 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:20.890494 | orchestrator |
2025-08-29 14:35:20.890505 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-08-29 14:35:20.890522 | orchestrator | Friday 29 August 2025 14:35:14 +0000 (0:00:12.687) 0:03:21.486 *********
2025-08-29 14:35:20.890541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-08-29 14:35:20.890558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-08-29 14:35:20.890622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-08-29 14:35:20.890682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-08-29 14:35:20.890712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-08-29 14:35:20.890729 | orchestrator |
2025-08-29 14:35:20.890746 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-08-29 14:35:20.890778 | orchestrator | Friday 29 August 2025 14:35:15 +0000 (0:00:00.375) 0:03:21.861 *********
2025-08-29 14:35:20.890795 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 14:35:20.890813 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:20.890830 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 14:35:20.890849 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 14:35:20.890868 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:20.890885 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:20.890900 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 14:35:20.890911 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:20.890927 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 14:35:20.890950 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 14:35:20.890975 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-08-29 14:35:20.890993 | orchestrator |
2025-08-29 14:35:20.891012 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-08-29 14:35:20.891031 | orchestrator | Friday 29 August 2025 14:35:16 +0000 (0:00:00.752) 0:03:22.614 *********
2025-08-29 14:35:20.891049 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:35:20.891066 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:35:20.891076 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:35:20.891087 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:35:20.891098 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:35:20.891108 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:35:20.891119 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:35:20.891129 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:35:20.891147 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:35:20.891158 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:35:20.891169 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:20.891180 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:35:20.891190 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:35:20.891201 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:35:20.891212 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:35:20.891223 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:35:20.891233 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:35:20.891244 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:35:20.891254 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:35:20.891265 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:35:20.891276 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:35:20.891320 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:35:23.246475 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:35:23.246584 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:35:23.246627 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:35:23.246638 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:35:23.246648 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:35:23.246659 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:35:23.246668 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:35:23.246679 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:35:23.246689 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:23.246700 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:35:23.246710 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:35:23.246719 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:23.246729 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:35:23.246739 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:35:23.246749 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:35:23.246758 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:35:23.246768 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:35:23.246777 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:35:23.246787 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:35:23.246796 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:35:23.246806 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:35:23.246828 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:23.246848 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:35:23.246859 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:35:23.246869 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:35:23.246878 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:35:23.246888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:35:23.246897 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:35:23.246907 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:35:23.246917 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-08-29 14:35:23.246927 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:35:23.246936 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:35:23.246946 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-08-29 14:35:23.246979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:35:23.246989 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:35:23.246999 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-08-29 14:35:23.247010 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:35:23.247021 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:35:23.247032 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-08-29 14:35:23.247043 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:35:23.247054 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:35:23.247064 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-08-29 14:35:23.247075 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:35:23.247112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:35:23.247130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:35:23.247147 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-08-29 14:35:23.247165 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:35:23.247183 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:35:23.247201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-08-29 14:35:23.247214 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-08-29 14:35:23.247225 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-08-29 14:35:23.247235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-08-29 14:35:23.247246 | orchestrator |
2025-08-29 14:35:23.247257 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-08-29 14:35:23.247268 | orchestrator | Friday 29 August 2025 14:35:20 +0000 (0:00:04.848) 0:03:27.462 *********
2025-08-29 14:35:23.247279 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:35:23.247290 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:35:23.247300 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:35:23.247311 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:35:23.247321 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:35:23.247331 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:35:23.247359 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-08-29 14:35:23.247370 | orchestrator |
2025-08-29 14:35:23.247379 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-08-29 14:35:23.247389 | orchestrator | Friday 29 August 2025 14:35:21 +0000 (0:00:00.645) 0:03:28.107 *********
2025-08-29 14:35:23.247398 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:35:23.247408 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:35:23.247418 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:23.247435 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:35:23.247445 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:23.247454 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:23.247464 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:35:23.247474 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:23.247483 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:35:23.247493 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:35:23.247503 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-08-29 14:35:23.247512 | orchestrator |
2025-08-29 14:35:23.247522 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-08-29 14:35:23.247536 | orchestrator | Friday 29 August 2025 14:35:22 +0000 (0:00:00.640) 0:03:28.748 *********
2025-08-29 14:35:23.247546 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:35:23.247556 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:23.247565 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:35:23.247575 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:23.247584 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:35:23.247617 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:35:23.247627 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:23.247636 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:23.247646 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:35:23.247655 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:35:23.247665 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-08-29 14:35:23.247675 | orchestrator |
2025-08-29 14:35:23.247684 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-08-29 14:35:23.247694 | orchestrator | Friday 29 August 2025 14:35:22 +0000 (0:00:00.797) 0:03:29.545 *********
2025-08-29 14:35:23.247703 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:23.247713 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:23.247722 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:23.247732 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:23.247741 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:23.247757 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:36.174370 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:36.174554 | orchestrator |
2025-08-29 14:35:36.174684 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-08-29 14:35:36.174717 | orchestrator | Friday 29 August 2025 14:35:23 +0000 (0:00:00.285) 0:03:29.830 *********
2025-08-29 14:35:36.174736 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:36.174756 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:36.174774 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:36.174791 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:36.174810 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:36.174827 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:36.174846 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:36.174864 | orchestrator |
2025-08-29 14:35:36.174883 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-08-29 14:35:36.174904 | orchestrator | Friday 29 August 2025 14:35:29 +0000 (0:00:06.736) 0:03:36.566 *********
2025-08-29 14:35:36.174923 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-08-29 14:35:36.174988 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:35:36.175009 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-08-29 14:35:36.175028 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-08-29 14:35:36.175046 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:35:36.175064 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-08-29 14:35:36.175082 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:35:36.175099 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-08-29 14:35:36.175117 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:35:36.175133 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-08-29 14:35:36.175150 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:35:36.175169 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:35:36.175187 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-08-29 14:35:36.175204 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:35:36.175229 | orchestrator |
2025-08-29 14:35:36.175248 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-08-29 14:35:36.175266 | orchestrator | Friday 29 August 2025 14:35:30 +0000 (0:00:00.329) 0:03:36.896 *********
2025-08-29 14:35:36.175284 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-08-29 14:35:36.175302 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-08-29 14:35:36.175320 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-08-29 14:35:36.175337 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-08-29 14:35:36.175355 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-08-29 14:35:36.175373 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-08-29 14:35:36.175391 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-08-29 14:35:36.175409 | orchestrator |
2025-08-29 14:35:36.175428 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-08-29 14:35:36.175447 | orchestrator | Friday 29 August 2025 14:35:31 +0000 (0:00:01.054) 0:03:37.951 *********
2025-08-29 14:35:36.175469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:35:36.175492 | orchestrator |
2025-08-29 14:35:36.175510 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-08-29 14:35:36.175530 | orchestrator | Friday 29 August 2025 14:35:31 +0000 (0:00:00.543) 0:03:38.495 *********
2025-08-29 14:35:36.175547 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:36.175566 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:36.175613 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:36.175633 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:36.175651 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:36.175669 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:36.175686 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:36.175704 | orchestrator |
2025-08-29 14:35:36.175723 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-08-29 14:35:36.175741 | orchestrator | Friday 29 August 2025 14:35:33 +0000 (0:00:01.294) 0:03:39.789 *********
2025-08-29 14:35:36.175759 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:36.175777 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:36.175820 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:36.175838 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:36.175856 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:36.175874 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:36.175892 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:36.175909 | orchestrator |
2025-08-29 14:35:36.175926 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-08-29 14:35:36.175944 | orchestrator | Friday 29 August 2025 14:35:33 +0000 (0:00:00.605) 0:03:40.394 *********
2025-08-29 14:35:36.175962 | orchestrator | changed: [testbed-manager]
2025-08-29 14:35:36.175979 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:35:36.175996 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:35:36.176031 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:35:36.176049 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:35:36.176067 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:35:36.176084 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:35:36.176102 | orchestrator |
2025-08-29 14:35:36.176121 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-08-29 14:35:36.176139 | orchestrator | Friday 29 August 2025 14:35:34 +0000 (0:00:00.661) 0:03:41.056 *********
2025-08-29 14:35:36.176158 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:36.176177 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:36.176195 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:36.176236 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:36.176269 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:36.176288 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:36.176305 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:36.176323 | orchestrator |
2025-08-29 14:35:36.176342 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-08-29 14:35:36.176362 | orchestrator | Friday 29 August 2025 14:35:35 +0000 (0:00:00.615) 0:03:41.672 *********
2025-08-29 14:35:36.176423 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476592.8949764, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:36.176449 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476609.0232036, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:36.176469 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476618.667974, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:36.176488 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476617.993821, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:36.176507 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476620.3107476, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:36.176551 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476619.4966195, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:36.176572 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1756476638.2998395, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:36.176645 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:53.623957 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 14:35:53.624088 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime':
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:35:53.624104 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:35:53.624116 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:35:53.624152 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2025-08-29 14:35:53.624165 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 14:35:53.624177 | orchestrator | 2025-08-29 14:35:53.624191 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-08-29 14:35:53.624205 | orchestrator | Friday 29 August 2025 14:35:36 +0000 (0:00:01.071) 0:03:42.744 ********* 2025-08-29 14:35:53.624216 | orchestrator | changed: [testbed-manager] 2025-08-29 14:35:53.624228 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:53.624239 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:53.624250 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:53.624260 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:53.624271 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:53.624282 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:53.624293 | orchestrator | 2025-08-29 14:35:53.624303 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-08-29 14:35:53.624331 | orchestrator | Friday 29 August 2025 14:35:37 +0000 (0:00:01.199) 0:03:43.944 ********* 2025-08-29 14:35:53.624343 | orchestrator | changed: [testbed-manager] 2025-08-29 14:35:53.624354 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:53.624364 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:53.624375 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:53.624402 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:53.624413 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:53.624424 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:53.624435 | orchestrator | 2025-08-29 14:35:53.624446 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-08-29 14:35:53.624457 | orchestrator | Friday 29 August 2025 14:35:38 +0000 (0:00:01.387) 0:03:45.331 ********* 2025-08-29 14:35:53.624468 | orchestrator | changed: [testbed-manager] 2025-08-29 14:35:53.624480 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:53.624492 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:53.624504 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:53.624515 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:53.624527 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:53.624539 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:53.624550 | orchestrator | 2025-08-29 14:35:53.624562 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-08-29 14:35:53.624800 | orchestrator | Friday 29 August 2025 14:35:39 +0000 (0:00:01.202) 0:03:46.533 ********* 2025-08-29 14:35:53.624955 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:35:53.624972 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:35:53.624984 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:35:53.624995 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:35:53.625006 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:35:53.625017 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:35:53.625028 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:35:53.625075 | orchestrator | 2025-08-29 14:35:53.625088 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-08-29 14:35:53.625100 | orchestrator | Friday 29 August 2025 14:35:40 +0000 
(0:00:00.268) 0:03:46.802 ********* 2025-08-29 14:35:53.625111 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:53.625124 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:35:53.625135 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:35:53.625146 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:35:53.625157 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:35:53.625168 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:35:53.625179 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:35:53.625189 | orchestrator | 2025-08-29 14:35:53.625200 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-08-29 14:35:53.625211 | orchestrator | Friday 29 August 2025 14:35:40 +0000 (0:00:00.765) 0:03:47.568 ********* 2025-08-29 14:35:53.625225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:35:53.625239 | orchestrator | 2025-08-29 14:35:53.625251 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-08-29 14:35:53.625262 | orchestrator | Friday 29 August 2025 14:35:41 +0000 (0:00:00.408) 0:03:47.976 ********* 2025-08-29 14:35:53.625272 | orchestrator | ok: [testbed-manager] 2025-08-29 14:35:53.625283 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:35:53.625294 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:35:53.625305 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:35:53.625316 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:35:53.625326 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:35:53.625337 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:35:53.625348 | orchestrator | 2025-08-29 14:35:53.625359 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2025-08-29 14:35:53.625370 | orchestrator | Friday 29 August 2025 14:35:50 +0000 (0:00:08.883) 0:03:56.860 *********
2025-08-29 14:35:53.625380 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:53.625391 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:53.625402 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:53.625413 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:53.625424 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:53.625434 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:53.625445 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:53.625455 | orchestrator |
2025-08-29 14:35:53.625487 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-08-29 14:35:53.625498 | orchestrator | Friday 29 August 2025 14:35:51 +0000 (0:00:01.230) 0:03:58.090 *********
2025-08-29 14:35:53.625510 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:53.625520 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:53.625531 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:53.625542 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:53.625552 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:53.625563 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:53.625606 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:53.625619 | orchestrator |
2025-08-29 14:35:53.625630 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-08-29 14:35:53.625641 | orchestrator | Friday 29 August 2025 14:35:52 +0000 (0:00:01.047) 0:03:59.138 *********
2025-08-29 14:35:53.625652 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:53.625663 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:53.625673 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:53.625684 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:53.625695 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:53.625706 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:53.625716 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:53.625727 | orchestrator |
2025-08-29 14:35:53.625738 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-08-29 14:35:53.625759 | orchestrator | Friday 29 August 2025 14:35:52 +0000 (0:00:00.430) 0:03:59.568 *********
2025-08-29 14:35:53.625770 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:53.625781 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:53.625792 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:53.625802 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:53.625813 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:53.625824 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:35:53.625834 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:35:53.625845 | orchestrator |
2025-08-29 14:35:53.625856 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-08-29 14:35:53.625867 | orchestrator | Friday 29 August 2025 14:35:53 +0000 (0:00:00.312) 0:03:59.881 *********
2025-08-29 14:35:53.625878 | orchestrator | ok: [testbed-manager]
2025-08-29 14:35:53.625889 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:35:53.625900 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:35:53.625911 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:35:53.625921 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:35:53.625971 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:05.869158 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:05.869286 | orchestrator |
2025-08-29 14:37:05.869304 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-08-29 14:37:05.869317 | orchestrator | Friday 29 August 2025 14:35:53 +0000 (0:00:00.320) 0:04:00.202 *********
2025-08-29 14:37:05.869329 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:05.869339 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:05.869351 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:05.869361 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:05.869373 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:05.869383 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:05.869394 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:05.869404 | orchestrator |
2025-08-29 14:37:05.869415 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-08-29 14:37:05.869426 | orchestrator | Friday 29 August 2025 14:35:59 +0000 (0:00:05.828) 0:04:06.031 *********
2025-08-29 14:37:05.869440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:37:05.869453 | orchestrator |
2025-08-29 14:37:05.869465 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-08-29 14:37:05.869477 | orchestrator | Friday 29 August 2025 14:35:59 +0000 (0:00:00.495) 0:04:06.526 *********
2025-08-29 14:37:05.869489 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-08-29 14:37:05.869501 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-08-29 14:37:05.869513 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:05.869525 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-08-29 14:37:05.869587 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-08-29 14:37:05.869599 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-08-29 14:37:05.869610 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-08-29 14:37:05.869621 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:05.869631 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:05.869642 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-08-29 14:37:05.869652 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-08-29 14:37:05.869663 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-08-29 14:37:05.869674 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-08-29 14:37:05.869686 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-08-29 14:37:05.869699 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-08-29 14:37:05.869712 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:05.869724 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:05.869759 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:05.869772 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-08-29 14:37:05.869785 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-08-29 14:37:05.869796 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:05.869808 | orchestrator |
2025-08-29 14:37:05.869821 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-08-29 14:37:05.869833 | orchestrator | Friday 29 August 2025 14:36:00 +0000 (0:00:00.361) 0:04:06.887 *********
2025-08-29 14:37:05.869846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:37:05.869857 | orchestrator |
2025-08-29 14:37:05.869882 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-08-29 14:37:05.869893 | orchestrator | Friday 29 August 2025 14:36:00 +0000 (0:00:00.360) 0:04:07.314 *********
2025-08-29 14:37:05.869903 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-08-29 14:37:05.869914 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-08-29 14:37:05.869926 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:05.869937 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:05.869947 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-08-29 14:37:05.869958 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-08-29 14:37:05.869968 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:05.869979 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:05.869989 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-08-29 14:37:05.870000 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-08-29 14:37:05.870010 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:05.870169 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:05.870182 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-08-29 14:37:05.870192 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:05.870203 | orchestrator |
2025-08-29 14:37:05.870213 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-08-29 14:37:05.870224 | orchestrator | Friday 29 August 2025 14:36:01 +0000 (0:00:00.360) 0:04:07.674 *********
2025-08-29 14:37:05.870235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:37:05.870246 | orchestrator |
2025-08-29 14:37:05.870256 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-08-29 14:37:05.870267 | orchestrator | Friday 29 August 2025 14:36:01 +0000 (0:00:00.407) 0:04:08.082 *********
2025-08-29 14:37:05.870278 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:05.870307 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:05.870319 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:05.870330 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:05.870340 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:05.870350 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:05.870361 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:05.870371 | orchestrator |
2025-08-29 14:37:05.870382 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-08-29 14:37:05.870393 | orchestrator | Friday 29 August 2025 14:36:36 +0000 (0:00:34.847) 0:04:42.930 *********
2025-08-29 14:37:05.870403 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:05.870414 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:05.870424 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:05.870435 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:05.870445 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:05.870465 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:05.870476 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:05.870487 | orchestrator |
2025-08-29 14:37:05.870498 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-08-29 14:37:05.870508 | orchestrator | Friday 29 August 2025 14:36:45 +0000 (0:00:08.688) 0:04:51.618 *********
2025-08-29 14:37:05.870519 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:05.870529 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:05.870562 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:05.870573 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:05.870584 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:05.870595 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:05.870605 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:05.870616 | orchestrator |
2025-08-29 14:37:05.870626 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-08-29 14:37:05.870637 | orchestrator | Friday 29 August 2025 14:36:52 +0000 (0:00:07.779) 0:04:59.398 *********
2025-08-29 14:37:05.870648 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:05.870658 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:05.870669 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:05.870680 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:05.870690 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:05.870701 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:05.870711 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:05.870722 | orchestrator |
2025-08-29 14:37:05.870733 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-08-29 14:37:05.870744 | orchestrator | Friday 29 August 2025 14:36:54 +0000 (0:00:01.911) 0:05:01.310 *********
2025-08-29 14:37:05.870755 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:05.870765 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:05.870776 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:05.870787 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:05.870797 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:05.870808 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:05.870818 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:05.870829 | orchestrator |
2025-08-29 14:37:05.870839 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-08-29 14:37:05.870850 | orchestrator | Friday 29 August 2025 14:37:01 +0000 (0:00:06.699) 0:05:08.009 *********
2025-08-29 14:37:05.870861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:37:05.870874 | orchestrator |
2025-08-29 14:37:05.870885 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-08-29 14:37:05.870895 | orchestrator | Friday 29 August 2025 14:37:01 +0000 (0:00:00.561) 0:05:08.570 *********
2025-08-29 14:37:05.870906 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:05.870917 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:05.870927 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:05.870944 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:05.870954 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:05.870965 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:05.870975 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:05.870986 | orchestrator |
2025-08-29 14:37:05.870997 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-08-29 14:37:05.871007 | orchestrator | Friday 29 August 2025 14:37:02 +0000 (0:00:00.736) 0:05:09.307 *********
2025-08-29 14:37:05.871018 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:05.871029 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:05.871039 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:05.871050 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:05.871061 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:05.871071 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:05.871089 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:05.871100 | orchestrator |
2025-08-29 14:37:05.871110 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-08-29 14:37:05.871121 | orchestrator | Friday 29 August 2025 14:37:04 +0000 (0:00:02.040) 0:05:11.347 *********
2025-08-29 14:37:05.871132 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:37:05.871142 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:37:05.871153 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:37:05.871164 | orchestrator | changed: [testbed-manager]
2025-08-29 14:37:05.871174 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:37:05.871185 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:37:05.871195 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:37:05.871206 | orchestrator |
2025-08-29 14:37:05.871217 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-08-29 14:37:05.871227 | orchestrator | Friday 29 August 2025 14:37:05 +0000 (0:00:00.830) 0:05:12.178 *********
2025-08-29 14:37:05.871238 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:05.871248 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:05.871259 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:05.871270 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:05.871280 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:05.871291 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:05.871301 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:05.871312 | orchestrator |
2025-08-29 14:37:05.871322 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-08-29 14:37:05.871340 | orchestrator | Friday 29 August 2025 14:37:05 +0000 (0:00:00.271) 0:05:12.449 *********
2025-08-29 14:37:33.318643 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:33.318757 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:33.318772 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:33.318783 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:33.318793 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:33.318803 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:33.318813 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:33.318823 | orchestrator |
2025-08-29 14:37:33.318834 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-08-29 14:37:33.318853 | orchestrator | Friday 29 August 2025 14:37:06 +0000 (0:00:00.411) 0:05:12.861 *********
2025-08-29 14:37:33.318869 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:33.318885 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:33.318901 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:33.318917 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:33.318933 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:33.318949 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:33.318964 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:33.318974 | orchestrator |
2025-08-29 14:37:33.318984 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-08-29 14:37:33.318994 | orchestrator | Friday 29 August 2025 14:37:06 +0000 (0:00:00.300) 0:05:13.162 *********
2025-08-29 14:37:33.319004 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:33.319013 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:33.319023 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:33.319033 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:33.319043 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:33.319053 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:33.319062 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:33.319072 | orchestrator |
2025-08-29 14:37:33.319082 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-08-29 14:37:33.319093 | orchestrator | Friday 29 August 2025 14:37:06 +0000 (0:00:00.286) 0:05:13.448 *********
2025-08-29 14:37:33.319102 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:33.319112 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:33.319121 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:33.319132 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:33.319169 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:33.319180 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:33.319190 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:33.319201 | orchestrator |
2025-08-29 14:37:33.319212 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-08-29 14:37:33.319223 | orchestrator | Friday 29 August 2025 14:37:07 +0000 (0:00:00.328) 0:05:13.776 *********
2025-08-29 14:37:33.319234 | orchestrator | ok: [testbed-manager] =>
2025-08-29 14:37:33.319244 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:37:33.319255 | orchestrator | ok: [testbed-node-0] =>
2025-08-29 14:37:33.319265 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:37:33.319276 | orchestrator | ok: [testbed-node-1] =>
2025-08-29 14:37:33.319287 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:37:33.319297 | orchestrator | ok: [testbed-node-2] =>
2025-08-29 14:37:33.319307 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:37:33.319318 | orchestrator | ok: [testbed-node-3] =>
2025-08-29 14:37:33.319328 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:37:33.319339 | orchestrator | ok: [testbed-node-4] =>
2025-08-29 14:37:33.319350 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:37:33.319361 | orchestrator | ok: [testbed-node-5] =>
2025-08-29 14:37:33.319371 | orchestrator |  docker_version: 5:27.5.1
2025-08-29 14:37:33.319382 | orchestrator |
2025-08-29 14:37:33.319394 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-08-29 14:37:33.319405 | orchestrator | Friday 29 August 2025 14:37:07 +0000 (0:00:00.337) 0:05:14.114 *********
2025-08-29 14:37:33.319416 | orchestrator | ok: [testbed-manager] =>
2025-08-29 14:37:33.319427 | orchestrator |  docker_cli_version: 5:27.5.1
2025-08-29 14:37:33.319437 | orchestrator | ok: [testbed-node-0] =>
2025-08-29 14:37:33.319448 | orchestrator |  docker_cli_version: 5:27.5.1
2025-08-29 14:37:33.319458 | orchestrator | ok: [testbed-node-1] =>
2025-08-29 14:37:33.319469 | orchestrator |  docker_cli_version: 5:27.5.1
2025-08-29 14:37:33.319480 | orchestrator | ok: [testbed-node-2] =>
2025-08-29 14:37:33.319491 | orchestrator |  docker_cli_version: 5:27.5.1
2025-08-29 14:37:33.319500 | orchestrator | ok: [testbed-node-3] =>
2025-08-29 14:37:33.319510 | orchestrator |  docker_cli_version: 5:27.5.1
2025-08-29 14:37:33.319587 | orchestrator | ok: [testbed-node-4] =>
2025-08-29 14:37:33.319599 | orchestrator |  docker_cli_version: 5:27.5.1
2025-08-29 14:37:33.319609 | orchestrator | ok: [testbed-node-5] =>
2025-08-29 14:37:33.319618 | orchestrator |  docker_cli_version: 5:27.5.1
2025-08-29 14:37:33.319628 | orchestrator |
2025-08-29 14:37:33.319638 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-08-29 14:37:33.319648 | orchestrator | Friday 29 August 2025 14:37:07 +0000 (0:00:00.306) 0:05:14.420 *********
2025-08-29 14:37:33.319657 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:33.319667 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:33.319676 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:33.319686 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:33.319695 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:33.319705 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:33.319714 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:33.319724 | orchestrator |
2025-08-29 14:37:33.319734 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-08-29 14:37:33.319744 | orchestrator | Friday 29 August 2025 14:37:08 +0000 (0:00:00.353) 0:05:14.773 *********
2025-08-29 14:37:33.319753 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:37:33.319763 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:37:33.319772 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:37:33.319782 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:37:33.319791 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:37:33.319801 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:37:33.319810 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:37:33.319820 | orchestrator |
2025-08-29 14:37:33.319829 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-08-29 14:37:33.319849 | orchestrator | Friday 29 August 2025 14:37:08 +0000 (0:00:00.282) 0:05:15.056 *********
2025-08-29 14:37:33.319876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:37:33.319889 | orchestrator |
2025-08-29 14:37:33.319917 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-08-29 14:37:33.319928 | orchestrator | Friday 29 August 2025 14:37:08 +0000 (0:00:00.499) 0:05:15.555 *********
2025-08-29 14:37:33.319937 | orchestrator | ok: [testbed-manager]
2025-08-29 14:37:33.319947 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:37:33.319957 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:37:33.319966 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:37:33.319976 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:37:33.319985 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:37:33.319995 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:37:33.320004 | orchestrator |
2025-08-29 14:37:33.320014 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-08-29 14:37:33.320024 | orchestrator | Friday 29 August 2025 14:37:09 +0000 (0:00:00.909) 0:05:16.465 *********
2025-08-29
14:37:33.320034 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:33.320043 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:37:33.320053 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:37:33.320062 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:37:33.320072 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:37:33.320081 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:37:33.320090 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:37:33.320100 | orchestrator | 2025-08-29 14:37:33.320109 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-08-29 14:37:33.320121 | orchestrator | Friday 29 August 2025 14:37:13 +0000 (0:00:03.454) 0:05:19.919 ********* 2025-08-29 14:37:33.320130 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-08-29 14:37:33.320140 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-08-29 14:37:33.320150 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-08-29 14:37:33.320160 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-08-29 14:37:33.320169 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-08-29 14:37:33.320179 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:37:33.320188 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-08-29 14:37:33.320198 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-08-29 14:37:33.320207 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-08-29 14:37:33.320217 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-08-29 14:37:33.320226 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:37:33.320236 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-08-29 14:37:33.320245 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-08-29 14:37:33.320255 | orchestrator | skipping: [testbed-node-2] => 
(item=docker-engine)  2025-08-29 14:37:33.320264 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:37:33.320274 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-08-29 14:37:33.320283 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:37:33.320293 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-08-29 14:37:33.320302 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-08-29 14:37:33.320312 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-08-29 14:37:33.320321 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-08-29 14:37:33.320331 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-08-29 14:37:33.320341 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:37:33.320350 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:37:33.320367 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-08-29 14:37:33.320376 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-08-29 14:37:33.320386 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-08-29 14:37:33.320395 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:37:33.320405 | orchestrator | 2025-08-29 14:37:33.320419 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-08-29 14:37:33.320429 | orchestrator | Friday 29 August 2025 14:37:13 +0000 (0:00:00.643) 0:05:20.563 ********* 2025-08-29 14:37:33.320439 | orchestrator | ok: [testbed-manager] 2025-08-29 14:37:33.320448 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:37:33.320458 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:37:33.320467 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:37:33.320477 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:37:33.320486 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:37:33.320496 | orchestrator | changed: [testbed-node-3] 
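The repository and pinning tasks that follow (Add repository gpg key, Add repository, Pin docker package version, Lock containerd package) boil down to writing an apt source, an apt pin, and a package hold. A minimal sketch is below; the repository URL, suite, keyring path, and file names are assumptions for illustration and not the role's actual templates — only the pinned version 5:27.5.1 comes from this log. Everything is written under a mktemp directory so the sketch has no side effects.

```shell
set -eu
etc=$(mktemp -d)   # stand-in for /etc/apt on a real host
mkdir -p "$etc/sources.list.d" "$etc/preferences.d"

# "Add repository": a signed-by style apt source (URL and suite are assumptions)
cat > "$etc/sources.list.d/docker.list" <<'EOF'
deb [signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu noble stable
EOF

# "Pin docker package version": keep apt on the wanted docker-ce build
cat > "$etc/preferences.d/docker" <<'EOF'
Package: docker-ce docker-ce-cli
Pin: version 5:27.5.1*
Pin-Priority: 1000
EOF

# "Lock containerd package" corresponds to holding the package, i.e. on a
# real host: apt-mark hold containerd.io (not run here)
echo "repo and pin files written to $etc"
```

The version pin plus the hold is what lets the playbook report `ok` on testbed-manager (already at the pinned version) while the fresh nodes report `changed`.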
orchestrator |
orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
orchestrator | Friday 29 August 2025 14:37:20 +0000 (0:00:06.536) 0:05:27.100 *********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Add repository] **********************************
orchestrator | Friday 29 August 2025 14:37:21 +0000 (0:00:01.270) 0:05:28.371 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | TASK [osism.services.docker : Update package cache] ****************************
orchestrator | Friday 29 August 2025 14:37:30 +0000 (0:00:08.314) 0:05:36.685 *********
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
orchestrator | Friday 29 August 2025 14:37:33 +0000 (0:00:03.210) 0:05:39.896 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
orchestrator | Friday 29 August 2025 14:37:34 +0000 (0:00:01.403) 0:05:41.299 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
orchestrator | Friday 29 August 2025 14:37:36 +0000 (0:00:01.520) 0:05:42.820 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [osism.services.docker : Install containerd package] **********************
orchestrator | Friday 29 August 2025 14:37:36 +0000 (0:00:00.613) 0:05:43.434 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
orchestrator | Friday 29 August 2025 14:37:47 +0000 (0:00:10.485) 0:05:53.919 *********
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
orchestrator | Friday 29 August 2025 14:37:48 +0000 (0:00:00.930) 0:05:54.850 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | TASK [osism.services.docker : Install docker package] **************************
orchestrator | Friday 29 August 2025 14:37:56 +0000 (0:00:08.521) 0:06:03.371 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
orchestrator | Friday 29 August 2025 14:38:08 +0000 (0:00:11.468) 0:06:14.840 *********
orchestrator | ok: [testbed-manager] => (item=python3-docker)
orchestrator | ok: [testbed-node-0] => (item=python3-docker)
orchestrator | ok: [testbed-node-1] => (item=python3-docker)
orchestrator | ok: [testbed-node-2] => (item=python3-docker)
orchestrator | ok: [testbed-node-3] => (item=python3-docker)
orchestrator | ok: [testbed-manager] => (item=python-docker)
orchestrator | ok: [testbed-node-4] => (item=python3-docker)
orchestrator | ok: [testbed-node-0] => (item=python-docker)
orchestrator | ok: [testbed-node-5] => (item=python3-docker)
orchestrator | ok: [testbed-node-1] => (item=python-docker)
orchestrator | ok: [testbed-node-2] => (item=python-docker)
orchestrator | ok: [testbed-node-3] => (item=python-docker)
orchestrator | ok: [testbed-node-4] => (item=python-docker)
orchestrator | ok: [testbed-node-5] => (item=python-docker)
orchestrator |
orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
orchestrator | Friday 29 August 2025 14:38:09 +0000 (0:00:01.212) 0:06:16.052 *********
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
orchestrator | Friday 29 August 2025 14:38:09 +0000 (0:00:00.526) 0:06:16.579 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
orchestrator | Friday 29 August 2025 14:38:14 +0000 (0:00:04.038) 0:06:20.618 *********
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
orchestrator | Friday 29 August 2025 14:38:14 +0000 (0:00:00.533) 0:06:21.151 *********
orchestrator | skipping: [testbed-manager] => (item=python3-docker)
orchestrator | skipping: [testbed-manager] => (item=python-docker)
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
orchestrator | skipping: [testbed-node-0] => (item=python-docker)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
orchestrator | skipping: [testbed-node-1] => (item=python-docker)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
orchestrator | skipping: [testbed-node-2] => (item=python-docker)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
orchestrator | skipping: [testbed-node-3] => (item=python-docker)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
orchestrator | skipping: [testbed-node-4] => (item=python-docker)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
orchestrator | skipping: [testbed-node-5] => (item=python-docker)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
orchestrator | Friday 29 August 2025 14:38:15 +0000 (0:00:00.719) 0:06:21.871 *********
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
orchestrator | Friday 29 August 2025 14:38:15 +0000 (0:00:00.539) 0:06:22.411 *********
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
orchestrator | Friday 29 August 2025 14:38:16 +0000 (0:00:00.533) 0:06:22.944 *********
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
orchestrator | Friday 29 August 2025 14:38:16 +0000 (0:00:00.530) 0:06:23.475 *********
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
orchestrator | Friday 29 August 2025 14:38:18 +0000 (0:00:01.720) 0:06:25.195 *********
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
orchestrator | Friday 29 August 2025 14:38:19 +0000 (0:00:01.052) 0:06:26.247 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
orchestrator | Friday 29 August 2025 14:38:20 +0000 (0:00:00.920) 0:06:27.168 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
orchestrator | Friday 29 August 2025 14:38:21 +0000 (0:00:00.871) 0:06:28.040 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
orchestrator | Friday 29 August 2025 14:38:23 +0000 (0:00:01.699) 0:06:29.740 *********
orchestrator | skipping: [testbed-manager]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
orchestrator | Friday 29 August 2025 14:38:24 +0000 (0:00:01.421) 0:06:31.161 *********
orchestrator | ok: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
orchestrator | Friday 29 August 2025 14:38:25 +0000 (0:00:01.345) 0:06:32.508 *********
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
orchestrator | Friday 29 August 2025 14:38:27 +0000 (0:00:01.463) 0:06:33.971 *********
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
orchestrator | Friday 29 August 2025 14:38:28 +0000 (0:00:01.057) 0:06:35.028 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [osism.services.docker : Manage service] **********************************
orchestrator | Friday 29 August 2025 14:38:29 +0000 (0:00:01.396) 0:06:36.424 *********
orchestrator | ok: [testbed-manager]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:40.947155 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:40.947165 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:40.947176 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:40.947186 | orchestrator |
2025-08-29 14:38:40.947197 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-08-29 14:38:40.947208 | orchestrator | Friday 29 August 2025 14:38:31 +0000 (0:00:01.172) 0:06:37.597 *********
2025-08-29 14:38:40.947219 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:40.947230 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:40.947240 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:40.947251 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:40.947261 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:40.947272 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:40.947283 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:40.947293 | orchestrator |
2025-08-29 14:38:40.947304 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-08-29 14:38:40.947315 | orchestrator | Friday 29 August 2025 14:38:32 +0000 (0:00:01.156) 0:06:38.753 *********
2025-08-29 14:38:40.947326 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:40.947336 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:40.947346 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:40.947357 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:40.947367 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:38:40.947378 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:38:40.947398 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:38:40.947409 | orchestrator |
2025-08-29 14:38:40.947420 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-08-29 14:38:40.947431 | orchestrator | Friday 29 August 2025 14:38:33 +0000 (0:00:01.105) 0:06:39.858 *********
2025-08-29 14:38:40.947442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:38:40.947453 | orchestrator |
2025-08-29 14:38:40.947464 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:40.947503 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:01.091) 0:06:40.949 *********
2025-08-29 14:38:40.947516 | orchestrator |
2025-08-29 14:38:40.947526 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:40.947537 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:00.041) 0:06:40.991 *********
2025-08-29 14:38:40.947548 | orchestrator |
2025-08-29 14:38:40.947558 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:40.947569 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:00.046) 0:06:41.037 *********
2025-08-29 14:38:40.947580 | orchestrator |
2025-08-29 14:38:40.947590 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:40.947601 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:00.038) 0:06:41.075 *********
2025-08-29 14:38:40.947611 | orchestrator |
2025-08-29 14:38:40.947622 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:40.947633 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:00.038) 0:06:41.114 *********
2025-08-29 14:38:40.947643 | orchestrator |
2025-08-29 14:38:40.947653 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:40.947664 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:00.057) 0:06:41.172 *********
2025-08-29 14:38:40.947674 | orchestrator |
2025-08-29 14:38:40.947685 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-08-29 14:38:40.947712 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:00.039) 0:06:41.211 *********
2025-08-29 14:38:40.947724 | orchestrator |
2025-08-29 14:38:40.947735 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-08-29 14:38:40.947750 | orchestrator | Friday 29 August 2025 14:38:34 +0000 (0:00:00.038) 0:06:41.250 *********
2025-08-29 14:38:40.947761 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:38:40.947772 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:38:40.947783 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:38:40.947793 | orchestrator |
2025-08-29 14:38:40.947804 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-08-29 14:38:40.947815 | orchestrator | Friday 29 August 2025 14:38:35 +0000 (0:00:01.184) 0:06:42.435 *********
2025-08-29 14:38:40.947826 | orchestrator | changed: [testbed-manager]
2025-08-29 14:38:40.947836 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:40.947847 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:40.947857 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:40.947868 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:40.947878 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:40.947889 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:40.947899 | orchestrator |
2025-08-29 14:38:40.947910 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-08-29 14:38:40.947920 | orchestrator | Friday 29 August 2025 14:38:37 +0000 (0:00:01.382) 0:06:43.817 *********
2025-08-29 14:38:40.947931 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:38:40.947941 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:40.947952 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:40.947962 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:40.947973 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:38:40.947992 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:38:40.948003 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:38:40.948013 | orchestrator |
2025-08-29 14:38:40.948024 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-08-29 14:38:40.948035 | orchestrator | Friday 29 August 2025 14:38:39 +0000 (0:00:02.552) 0:06:46.370 *********
2025-08-29 14:38:40.948046 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:38:40.948056 | orchestrator |
2025-08-29 14:38:40.948067 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-08-29 14:38:40.948077 | orchestrator | Friday 29 August 2025 14:38:39 +0000 (0:00:00.107) 0:06:46.477 *********
2025-08-29 14:38:40.948088 | orchestrator | ok: [testbed-manager]
2025-08-29 14:38:40.948099 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:38:40.948109 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:38:40.948120 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:38:40.948138 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:39:06.694807 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:39:06.694910 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:39:06.694922 | orchestrator |
2025-08-29 14:39:06.694932 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-08-29 14:39:06.694941 | orchestrator | Friday 29 August 2025 14:38:40 +0000 (0:00:01.044) 0:06:47.522 *********
2025-08-29 14:39:06.694951 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:39:06.694960 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:39:06.694968 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:39:06.694976 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:39:06.694984 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:39:06.694992 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:39:06.694999 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:39:06.695007 | orchestrator |
2025-08-29 14:39:06.695016 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-08-29 14:39:06.695037 | orchestrator | Friday 29 August 2025 14:38:41 +0000 (0:00:00.531) 0:06:48.053 *********
2025-08-29 14:39:06.695048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:39:06.695058 | orchestrator |
2025-08-29 14:39:06.695076 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-08-29 14:39:06.695084 | orchestrator | Friday 29 August 2025 14:38:42 +0000 (0:00:01.078) 0:06:49.132 *********
2025-08-29 14:39:06.695092 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.695101 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:06.695110 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:06.695118 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:06.695126 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:06.695134 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:06.695142 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:06.695150 | orchestrator |
2025-08-29 14:39:06.695158 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-08-29 14:39:06.695166 | orchestrator | Friday 29 August 2025 14:38:43 +0000 (0:00:00.888) 0:06:50.020 *********
2025-08-29 14:39:06.695174 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-08-29 14:39:06.695182 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-08-29 14:39:06.695190 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-08-29 14:39:06.695198 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-08-29 14:39:06.695206 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-08-29 14:39:06.695214 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-08-29 14:39:06.695222 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-08-29 14:39:06.695230 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-08-29 14:39:06.695238 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-08-29 14:39:06.695266 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-08-29 14:39:06.695274 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-08-29 14:39:06.695282 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-08-29 14:39:06.695290 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-08-29 14:39:06.695298 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-08-29 14:39:06.695306 | orchestrator |
2025-08-29 14:39:06.695314 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-08-29 14:39:06.695322 | orchestrator | Friday 29 August 2025 14:38:45 +0000 (0:00:02.506) 0:06:52.527 *********
2025-08-29 14:39:06.695342 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:39:06.695351 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:39:06.695361 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:39:06.695369 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:39:06.695378 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:39:06.695387 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:39:06.695396 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:39:06.695405 | orchestrator |
2025-08-29 14:39:06.695414 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-08-29 14:39:06.695423 | orchestrator | Friday 29 August 2025 14:38:46 +0000 (0:00:00.506) 0:06:53.034 *********
2025-08-29 14:39:06.695434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:39:06.695445 | orchestrator |
2025-08-29 14:39:06.695470 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-08-29 14:39:06.695480 | orchestrator | Friday 29 August 2025 14:38:47 +0000 (0:00:01.027) 0:06:54.062 *********
2025-08-29 14:39:06.695489 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.695498 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:06.695507 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:06.695516 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:06.695525 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:06.695533 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:06.695542 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:06.695551 | orchestrator |
2025-08-29 14:39:06.695560 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-08-29 14:39:06.695569 | orchestrator | Friday 29 August 2025 14:38:48 +0000 (0:00:00.846) 0:06:54.908 *********
2025-08-29 14:39:06.695578 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.695587 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:06.695596 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:06.695605 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:06.695614 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:06.695623 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:06.695631 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:06.695641 | orchestrator |
2025-08-29 14:39:06.695650 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-08-29 14:39:06.695672 | orchestrator | Friday 29 August 2025 14:38:49 +0000 (0:00:00.823) 0:06:55.732 *********
2025-08-29 14:39:06.695682 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:39:06.695691 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:39:06.695700 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:39:06.695709 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:39:06.695717 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:39:06.695725 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:39:06.695732 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:39:06.695740 | orchestrator |
2025-08-29 14:39:06.695748 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-08-29 14:39:06.695756 | orchestrator | Friday 29 August 2025 14:38:49 +0000 (0:00:00.506) 0:06:56.238 *********
2025-08-29 14:39:06.695775 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.695784 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:06.695791 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:06.695799 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:06.695807 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:06.695815 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:06.695823 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:06.695830 | orchestrator |
2025-08-29 14:39:06.695838 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-08-29 14:39:06.695846 | orchestrator | Friday 29 August 2025 14:38:51 +0000 (0:00:01.656) 0:06:57.895 *********
2025-08-29 14:39:06.695854 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:39:06.695862 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:39:06.695870 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:39:06.695878 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:39:06.695886 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:39:06.695893 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:39:06.695901 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:39:06.695909 | orchestrator |
2025-08-29 14:39:06.695917 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-08-29 14:39:06.695925 | orchestrator | Friday 29 August 2025 14:38:51 +0000 (0:00:00.511) 0:06:58.406 *********
2025-08-29 14:39:06.695933 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.695941 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:39:06.695948 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:39:06.695956 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:39:06.695964 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:39:06.695972 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:39:06.695980 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:39:06.695988 | orchestrator |
2025-08-29 14:39:06.695995 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-08-29 14:39:06.696003 | orchestrator | Friday 29 August 2025 14:38:59 +0000 (0:00:07.430) 0:07:05.837 *********
2025-08-29 14:39:06.696011 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.696019 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:39:06.696027 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:39:06.696035 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:39:06.696042 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:39:06.696050 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:39:06.696058 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:39:06.696066 | orchestrator |
2025-08-29 14:39:06.696074 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-08-29 14:39:06.696081 | orchestrator | Friday 29 August 2025 14:39:00 +0000 (0:00:01.905) 0:07:07.230 *********
2025-08-29 14:39:06.696089 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.696097 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:39:06.696105 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:39:06.696113 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:39:06.696120 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:39:06.696128 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:39:06.696136 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:39:06.696144 | orchestrator |
2025-08-29 14:39:06.696152 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-08-29 14:39:06.696164 | orchestrator | Friday 29 August 2025 14:39:02 +0000 (0:00:01.905) 0:07:09.135 *********
2025-08-29 14:39:06.696172 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.696180 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:39:06.696188 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:39:06.696196 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:39:06.696203 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:39:06.696211 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:39:06.696219 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:39:06.696226 | orchestrator |
2025-08-29 14:39:06.696234 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-08-29 14:39:06.696248 | orchestrator | Friday 29 August 2025 14:39:04 +0000 (0:00:01.727) 0:07:10.863 *********
2025-08-29 14:39:06.696256 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:06.696264 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:06.696271 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:06.696279 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:06.696287 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:06.696295 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:06.696302 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:06.696310 | orchestrator |
2025-08-29 14:39:06.696318 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-08-29 14:39:06.696326 | orchestrator | Friday 29 August 2025 14:39:05 +0000 (0:00:00.840) 0:07:11.703 *********
2025-08-29 14:39:06.696334 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:39:06.696342 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:39:06.696350 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:39:06.696358 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:39:06.696366 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:39:06.696373 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:39:06.696381 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:39:06.696389 | orchestrator |
2025-08-29 14:39:06.696397 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-08-29 14:39:06.696405 | orchestrator | Friday 29 August 2025 14:39:06 +0000 (0:00:01.009) 0:07:12.713 *********
2025-08-29 14:39:06.696412 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:39:06.696420 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:39:06.696428 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:39:06.696436 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:39:06.696444 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:39:06.696504 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:39:06.696515 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:39:06.696522 | orchestrator |
2025-08-29 14:39:06.696536 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-08-29 14:39:39.870492 | orchestrator | Friday 29 August 2025 14:39:06 +0000 (0:00:00.558) 0:07:13.271 *********
2025-08-29 14:39:39.870605 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.870622 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.870634 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.870645 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.870655 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.870666 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.870677 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.870688 | orchestrator |
2025-08-29 14:39:39.870700 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-08-29 14:39:39.870713 | orchestrator | Friday 29 August 2025 14:39:07 +0000 (0:00:00.543) 0:07:13.815 *********
2025-08-29 14:39:39.870724 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.870735 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.870746 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.870756 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.870767 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.870777 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.870788 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.870798 | orchestrator |
2025-08-29 14:39:39.870809 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-08-29 14:39:39.870820 | orchestrator | Friday 29 August 2025 14:39:07 +0000 (0:00:00.538) 0:07:14.354 *********
2025-08-29 14:39:39.870831 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.870841 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.870852 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.870862 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.870873 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.870883 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.870894 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.870905 | orchestrator |
2025-08-29 14:39:39.870915 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-08-29 14:39:39.870950 | orchestrator | Friday 29 August 2025 14:39:08 +0000 (0:00:00.563) 0:07:14.918 *********
2025-08-29 14:39:39.870962 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.870974 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.870986 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.870997 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.871009 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.871021 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.871033 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.871043 | orchestrator |
2025-08-29 14:39:39.871054 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-08-29 14:39:39.871065 | orchestrator | Friday 29 August 2025 14:39:14 +0000 (0:00:05.973) 0:07:20.891 *********
2025-08-29 14:39:39.871075 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:39:39.871086 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:39:39.871097 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:39:39.871108 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:39:39.871118 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:39:39.871129 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:39:39.871139 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:39:39.871149 | orchestrator |
2025-08-29 14:39:39.871160 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-08-29 14:39:39.871171 | orchestrator | Friday 29 August 2025 14:39:14 +0000 (0:00:00.613) 0:07:21.505 *********
2025-08-29 14:39:39.871183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:39:39.871196 | orchestrator |
2025-08-29 14:39:39.871207 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-08-29 14:39:39.871217 | orchestrator | Friday 29 August 2025 14:39:15 +0000 (0:00:00.809) 0:07:22.314 *********
2025-08-29 14:39:39.871228 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.871252 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.871263 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.871274 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.871284 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.871295 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.871306 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.871316 | orchestrator |
2025-08-29 14:39:39.871327 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-08-29 14:39:39.871338 | orchestrator | Friday 29 August 2025 14:39:17 +0000 (0:00:01.947) 0:07:24.262 *********
2025-08-29 14:39:39.871349 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.871359 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.871370 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.871380 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.871390 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.871401 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.871429 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.871442 | orchestrator |
2025-08-29 14:39:39.871452 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-08-29 14:39:39.871463 | orchestrator | Friday 29 August 2025 14:39:18 +0000 (0:00:01.119) 0:07:25.381 *********
2025-08-29 14:39:39.871473 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.871484 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.871494 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.871505 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.871515 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.871526 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.871536 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.871546 | orchestrator |
2025-08-29 14:39:39.871557 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-08-29 14:39:39.871568 | orchestrator | Friday 29 August 2025 14:39:19 +0000 (0:00:00.875) 0:07:26.257 *********
2025-08-29 14:39:39.871587 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:39:39.871600 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:39:39.871611 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:39:39.871639 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:39:39.871650 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:39:39.871661 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:39:39.871672 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-08-29 14:39:39.871683 | orchestrator |
2025-08-29 14:39:39.871694 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-08-29 14:39:39.871705 | orchestrator | Friday 29 August 2025 14:39:21 +0000 (0:00:01.724) 0:07:27.981 *********
2025-08-29 14:39:39.871716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:39:39.871728 | orchestrator |
2025-08-29 14:39:39.871739 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-08-29 14:39:39.871750 | orchestrator | Friday 29 August 2025 14:39:22 +0000 (0:00:01.017) 0:07:28.999 *********
2025-08-29 14:39:39.871760 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:39:39.871771 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:39:39.871782 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:39:39.871793 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:39:39.871803 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:39:39.871814 | orchestrator | changed: [testbed-manager]
2025-08-29 14:39:39.871825 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:39:39.871835 | orchestrator |
2025-08-29 14:39:39.871846 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-08-29 14:39:39.871857 | orchestrator | Friday 29 August 2025 14:39:31 +0000 (0:00:09.265) 0:07:38.265 *********
2025-08-29 14:39:39.871868 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.871878 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.871889 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.871900 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.871911 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.871921 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.871932 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.871943 | orchestrator |
2025-08-29 14:39:39.871954 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-08-29 14:39:39.871964 | orchestrator | Friday 29 August 2025 14:39:33 +0000 (0:00:01.884) 0:07:40.149 *********
2025-08-29 14:39:39.871975 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.871986 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.871997 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.872007 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.872018 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.872028 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.872039 | orchestrator |
2025-08-29 14:39:39.872050 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-08-29 14:39:39.872061 | orchestrator | Friday 29 August 2025 14:39:34 +0000 (0:00:01.329) 0:07:41.478 *********
2025-08-29 14:39:39.872078 | orchestrator | changed: [testbed-manager]
2025-08-29 14:39:39.872089 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:39:39.872099 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:39:39.872110 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:39:39.872126 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:39:39.872137 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:39:39.872148 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:39:39.872159 | orchestrator |
2025-08-29 14:39:39.872170 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-08-29 14:39:39.872180 | orchestrator |
2025-08-29 14:39:39.872191 | orchestrator | TASK [Include hardening role] **************************************************
2025-08-29 14:39:39.872202 | orchestrator | Friday 29 August 2025 14:39:36 +0000 (0:00:01.284) 0:07:42.762 *********
2025-08-29 14:39:39.872213 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:39:39.872223 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:39:39.872234 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:39:39.872244 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:39:39.872255 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:39:39.872266 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:39:39.872276 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:39:39.872287 | orchestrator |
2025-08-29 14:39:39.872298 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-08-29 14:39:39.872309 | orchestrator |
2025-08-29 14:39:39.872320 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-08-29 14:39:39.872330 | orchestrator | Friday 29 August 2025 14:39:36 +0000 (0:00:00.530) 0:07:43.293 *********
2025-08-29 14:39:39.872341 | orchestrator | changed: [testbed-manager]
2025-08-29 14:39:39.872352 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:39:39.872362 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:39:39.872373 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:39:39.872384 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:39:39.872394 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:39:39.872405 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:39:39.872433 | orchestrator |
2025-08-29 14:39:39.872445 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-08-29 14:39:39.872455 | orchestrator | Friday 29 August 2025 14:39:38 +0000 (0:00:01.553) 0:07:44.847 *********
2025-08-29 14:39:39.872466 | orchestrator | ok: [testbed-manager]
2025-08-29 14:39:39.872477 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:39:39.872488 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:39:39.872499 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:39:39.872509 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:39:39.872520 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:39:39.872531 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:39:39.872541 | orchestrator |
2025-08-29 14:39:39.872552 | orchestrator | TASK [Include auditd role] *****************************************************
2025-08-29 14:39:39.872569 | orchestrator | Friday 29 August 2025 14:39:39 +0000 (0:00:01.593) 0:07:46.440 *********
2025-08-29 14:40:03.925650 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:40:03.925762 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:40:03.925779 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:40:03.925791 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:40:03.925802 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:40:03.925814 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:40:03.925825 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:40:03.925837 | orchestrator |
2025-08-29 14:40:03.925849 | orchestrator | TASK [Include smartd role] *****************************************************
2025-08-29 14:40:03.925863 | orchestrator | Friday 29 August 2025 14:39:40 +0000 (0:00:00.526) 0:07:46.967 *********
2025-08-29 14:40:03.925874 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:40:03.925887 | orchestrator |
2025-08-29 14:40:03.925898 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-08-29 14:40:03.925934 | orchestrator | Friday 29 August 2025 14:39:41 +0000 (0:00:01.037) 0:07:48.004 *********
2025-08-29 14:40:03.925948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:40:03.925961 | orchestrator |
2025-08-29 14:40:03.925972 | orchestrator |
2025-08-29 14:40:03.925972 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-08-29 14:40:03.925983 | orchestrator | Friday 29 August 2025 14:39:42 +0000 (0:00:00.811) 0:07:48.816 *********
2025-08-29 14:40:03.925993 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:03.926004 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:03.926066 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:03.926078 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:03.926090 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:03.926100 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:03.926111 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:03.926122 | orchestrator |
2025-08-29 14:40:03.926133 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-08-29 14:40:03.926144 | orchestrator | Friday 29 August 2025 14:39:51 +0000 (0:00:08.879) 0:07:57.696 *********
2025-08-29 14:40:03.926155 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:03.926166 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:03.926177 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:03.926187 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:03.926198 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:03.926209 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:03.926219 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:03.926230 | orchestrator |
2025-08-29 14:40:03.926241 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-08-29 14:40:03.926252 | orchestrator | Friday 29 August 2025 14:39:51 +0000 (0:00:00.883) 0:07:58.579 *********
2025-08-29 14:40:03.926263 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:03.926273 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:03.926284 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:03.926295 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:03.926305 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:03.926316 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:03.926327 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:03.926338 | orchestrator |
2025-08-29 14:40:03.926348 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-08-29 14:40:03.926359 | orchestrator | Friday 29 August 2025 14:39:53 +0000 (0:00:01.545) 0:08:00.125 *********
2025-08-29 14:40:03.926390 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:03.926402 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:03.926412 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:03.926423 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:03.926433 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:03.926444 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:03.926455 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:03.926465 | orchestrator |
2025-08-29 14:40:03.926476 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-08-29 14:40:03.926487 | orchestrator | Friday 29 August 2025 14:39:55 +0000 (0:00:01.690) 0:08:01.815 *********
2025-08-29 14:40:03.926498 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:03.926508 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:03.926568 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:03.926581 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:03.926592 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:03.926603 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:03.926613 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:03.926624 | orchestrator |
2025-08-29 14:40:03.926635 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-08-29 14:40:03.926656 | orchestrator | Friday 29 August 2025 14:39:56 +0000 (0:00:01.166) 0:08:02.982 *********
2025-08-29 14:40:03.926667 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:03.926678 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:03.926688 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:03.926699 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:03.926709 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:03.926720 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:03.926730 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:03.926741 | orchestrator |
2025-08-29 14:40:03.926752 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-08-29 14:40:03.926762 | orchestrator |
2025-08-29 14:40:03.926773 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-08-29 14:40:03.926784 | orchestrator | Friday 29 August 2025 14:39:57 +0000 (0:00:01.336) 0:08:04.318 *********
2025-08-29 14:40:03.926796 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:40:03.926806 | orchestrator |
2025-08-29 14:40:03.926817 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-08-29 14:40:03.926846 | orchestrator | Friday 29 August 2025 14:39:58 +0000 (0:00:00.806) 0:08:05.124 *********
2025-08-29 14:40:03.926857 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:03.926869 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:03.926880 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:03.926890 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:03.926901 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:03.926912 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:03.926923 | orchestrator | ok: [testbed-node-5]
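The osism.services.smartd tasks in this play install smartmontools, copy a configuration file into place, and restart the daemon. For orientation, a minimal smartd.conf of the kind such a role might deploy looks like the following; the directive values here are assumptions for illustration, not the role's actual template:

```
# Hypothetical minimal /etc/smartd.conf sketch (the real file comes from
# the osism.services.smartd role template, not from this example):
# monitor all detected devices with default checks (-a), enable automatic
# offline testing (-o) and attribute autosave (-S), and schedule a short
# self-test daily at 02:00.
DEVICESCAN -a -o on -S on -s (S/../.././02)
```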
2025-08-29 14:40:03.926934 | orchestrator |
2025-08-29 14:40:03.926945 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-08-29 14:40:03.926956 | orchestrator | Friday 29 August 2025 14:39:59 +0000 (0:00:00.836) 0:08:05.961 *********
2025-08-29 14:40:03.926967 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:03.926977 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:03.926988 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:03.926999 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:03.927010 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:03.927021 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:03.927031 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:03.927042 | orchestrator |
2025-08-29 14:40:03.927053 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-08-29 14:40:03.927064 | orchestrator | Friday 29 August 2025 14:40:00 +0000 (0:00:01.429) 0:08:07.391 *********
2025-08-29 14:40:03.927075 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:40:03.927086 | orchestrator |
2025-08-29 14:40:03.927097 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-08-29 14:40:03.927107 | orchestrator | Friday 29 August 2025 14:40:01 +0000 (0:00:00.926) 0:08:08.318 *********
2025-08-29 14:40:03.927118 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:03.927129 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:03.927140 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:03.927150 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:03.927161 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:03.927172 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:03.927183 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:40:03.927194 | orchestrator |
2025-08-29 14:40:03.927205 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-08-29 14:40:03.927215 | orchestrator | Friday 29 August 2025 14:40:02 +0000 (0:00:00.835) 0:08:09.154 *********
2025-08-29 14:40:03.927226 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:03.927237 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:03.927254 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:03.927266 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:03.927276 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:03.927287 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:03.927298 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:03.927308 | orchestrator |
2025-08-29 14:40:03.927319 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:40:03.927331 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-08-29 14:40:03.927342 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-08-29 14:40:03.927353 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-08-29 14:40:03.927388 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-08-29 14:40:03.927400 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-08-29 14:40:03.927411 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-08-29 14:40:03.927422 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-08-29 14:40:03.927433 | orchestrator |
2025-08-29 14:40:03.927444 | orchestrator |
2025-08-29 14:40:03.927455 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:40:03.927465 | orchestrator | Friday 29 August 2025 14:40:03 +0000 (0:00:01.331) 0:08:10.486 *********
2025-08-29 14:40:03.927476 | orchestrator | ===============================================================================
2025-08-29 14:40:03.927487 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.92s
2025-08-29 14:40:03.927498 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.24s
2025-08-29 14:40:03.927508 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.85s
2025-08-29 14:40:03.927519 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.19s
2025-08-29 14:40:03.927529 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.00s
2025-08-29 14:40:03.927540 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.69s
2025-08-29 14:40:03.927552 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.47s
2025-08-29 14:40:03.927563 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.49s
2025-08-29 14:40:03.927573 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.27s
2025-08-29 14:40:03.927584 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.88s
2025-08-29 14:40:03.927601 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.88s
2025-08-29 14:40:04.378488 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.69s
2025-08-29 14:40:04.378592 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.52s
2025-08-29 14:40:04.378607 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.31s
2025-08-29 14:40:04.378619 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.78s
2025-08-29 14:40:04.378630 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.43s
2025-08-29 14:40:04.378641 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.74s
2025-08-29 14:40:04.378652 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.70s
2025-08-29 14:40:04.378690 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.54s
2025-08-29 14:40:04.378702 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.97s
2025-08-29 14:40:04.703850 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-08-29 14:40:04.703939 | orchestrator | + osism apply network
2025-08-29 14:40:17.617848 | orchestrator | 2025-08-29 14:40:17 | INFO  | Task 750adf5c-efbd-450d-aa5d-25525497b428 (network) was prepared for execution.
2025-08-29 14:40:17.619106 | orchestrator | 2025-08-29 14:40:17 | INFO  | It takes a moment until task 750adf5c-efbd-450d-aa5d-25525497b428 (network) has been started and output is visible here.
2025-08-29 14:40:46.281229 | orchestrator |
2025-08-29 14:40:46.281360 | orchestrator | PLAY [Apply role network] ******************************************************
2025-08-29 14:40:46.281378 | orchestrator |
2025-08-29 14:40:46.281390 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-08-29 14:40:46.281403 | orchestrator | Friday 29 August 2025 14:40:22 +0000 (0:00:00.279) 0:00:00.279 *********
2025-08-29 14:40:46.281415 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:46.281428 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:46.281439 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:46.281450 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:46.281462 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:46.281473 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:46.281485 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:40:46.281497 | orchestrator |
2025-08-29 14:40:46.281508 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-08-29 14:40:46.281520 | orchestrator | Friday 29 August 2025 14:40:22 +0000 (0:00:00.741) 0:00:01.020 *********
2025-08-29 14:40:46.281533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:40:46.281547 | orchestrator |
2025-08-29 14:40:46.281559 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-08-29 14:40:46.281570 | orchestrator | Friday 29 August 2025 14:40:23 +0000 (0:00:01.220) 0:00:02.241 *********
2025-08-29 14:40:46.281582 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:46.281593 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:46.281605 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:46.281616 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:46.281627 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:40:46.281638 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:46.281650 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:46.281661 | orchestrator |
2025-08-29 14:40:46.281673 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-08-29 14:40:46.281684 | orchestrator | Friday 29 August 2025 14:40:25 +0000 (0:00:01.711) 0:00:03.953 *********
2025-08-29 14:40:46.281696 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:46.281708 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:46.281719 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:46.281730 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:46.281742 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:46.281754 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:46.281766 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:40:46.281778 | orchestrator |
2025-08-29 14:40:46.281791 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-08-29 14:40:46.281803 | orchestrator | Friday 29 August 2025 14:40:27 +0000 (0:00:01.580) 0:00:05.533 *********
2025-08-29 14:40:46.281816 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-08-29 14:40:46.281829 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-08-29 14:40:46.281841 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-08-29 14:40:46.281854 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-08-29 14:40:46.281867 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-08-29 14:40:46.281903 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-08-29 14:40:46.281916 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-08-29 14:40:46.281928 | orchestrator |
2025-08-29 14:40:46.281940 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-08-29 14:40:46.281952 | orchestrator | Friday 29 August 2025 14:40:28 +0000 (0:00:00.958) 0:00:06.491 *********
2025-08-29 14:40:46.281964 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 14:40:46.281976 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:40:46.281988 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 14:40:46.282000 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 14:40:46.282011 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 14:40:46.282127 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 14:40:46.282152 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 14:40:46.282164 | orchestrator |
2025-08-29 14:40:46.282175 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-08-29 14:40:46.282186 | orchestrator | Friday 29 August 2025 14:40:31 +0000 (0:00:03.470) 0:00:09.962 *********
2025-08-29 14:40:46.282208 | orchestrator | changed: [testbed-manager]
2025-08-29 14:40:46.282219 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:46.282230 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:40:46.282241 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:46.282252 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:46.282263 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:46.282274 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:46.282284 | orchestrator |
2025-08-29 14:40:46.282325 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-08-29 14:40:46.282337 | orchestrator | Friday 29 August 2025 14:40:33 +0000 (0:00:01.476) 0:00:11.439 *********
2025-08-29 14:40:46.282348 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:40:46.282359 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 14:40:46.282370 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 14:40:46.282380 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 14:40:46.282391 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 14:40:46.282402 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 14:40:46.282413 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 14:40:46.282423 | orchestrator |
2025-08-29 14:40:46.282434 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-08-29 14:40:46.282445 | orchestrator | Friday 29 August 2025 14:40:35 +0000 (0:00:02.029) 0:00:13.469 *********
2025-08-29 14:40:46.282456 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:46.282467 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:46.282477 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:46.282488 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:46.282499 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:46.282509 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:46.282520 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:40:46.282531 | orchestrator |
2025-08-29 14:40:46.282542 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-08-29 14:40:46.282569 | orchestrator | Friday 29 August 2025 14:40:36 +0000 (0:00:01.126) 0:00:14.596 *********
2025-08-29 14:40:46.282580 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:40:46.282591 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:40:46.282602 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:40:46.282612 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:40:46.282623 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:40:46.282634 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:40:46.282644 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:40:46.282655 | orchestrator |
2025-08-29 14:40:46.282666 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-08-29 14:40:46.282677 | orchestrator | Friday 29 August 2025 14:40:37 +0000 (0:00:00.708) 0:00:15.304 *********
2025-08-29 14:40:46.282700 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:46.282711 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:46.282722 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:46.282732 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:46.282743 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:46.282754 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:46.282764 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:40:46.282775 | orchestrator |
2025-08-29 14:40:46.282786 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-08-29 14:40:46.282796 | orchestrator | Friday 29 August 2025 14:40:39 +0000 (0:00:02.131) 0:00:17.435 *********
2025-08-29 14:40:46.282807 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:40:46.282818 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:40:46.282829 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:40:46.282840 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:40:46.282850 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:40:46.282861 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:40:46.282872 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-08-29 14:40:46.282884 | orchestrator |
2025-08-29 14:40:46.282895 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-08-29 14:40:46.282921 | orchestrator | Friday 29 August 2025 14:40:40 +0000 (0:00:01.045) 0:00:18.481 *********
2025-08-29 14:40:46.282932 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:46.282943 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:40:46.282954 | orchestrator | changed: [testbed-node-1]
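The netplan tasks in this play render a configuration template on localhost, copy the result to every host, and later manage /etc/netplan/01-osism.yaml while removing the cloud-init file 50-cloud-init.yaml. As a rough illustration only, a rendered file could resemble the following sketch; the interface name `ens3` and the exact keys are assumptions, while the address matches testbed-node-0's management IP that appears in the vxlan items of this log:

```yaml
# Hypothetical sketch of a rendered /etc/netplan/01-osism.yaml;
# not the role's actual output. 'ens3' is an assumed interface name.
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
```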
2025-08-29 14:40:46.282964 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:40:46.282975 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:40:46.282985 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:40:46.282996 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:40:46.283007 | orchestrator |
2025-08-29 14:40:46.283017 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-08-29 14:40:46.283028 | orchestrator | Friday 29 August 2025 14:40:41 +0000 (0:00:01.660) 0:00:20.142 *********
2025-08-29 14:40:46.283039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:40:46.283052 | orchestrator |
2025-08-29 14:40:46.283063 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-08-29 14:40:46.283074 | orchestrator | Friday 29 August 2025 14:40:43 +0000 (0:00:01.323) 0:00:21.465 *********
2025-08-29 14:40:46.283084 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:46.283095 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:46.283106 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:46.283116 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:46.283127 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:46.283138 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:46.283148 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:40:46.283159 | orchestrator |
2025-08-29 14:40:46.283170 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-08-29 14:40:46.283181 | orchestrator | Friday 29 August 2025 14:40:44 +0000 (0:00:00.970) 0:00:22.436 *********
2025-08-29 14:40:46.283191 | orchestrator | ok: [testbed-manager]
2025-08-29 14:40:46.283202 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:40:46.283213 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:40:46.283223 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:40:46.283234 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:40:46.283244 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:40:46.283255 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:40:46.283266 | orchestrator |
2025-08-29 14:40:46.283277 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-08-29 14:40:46.283287 | orchestrator | Friday 29 August 2025 14:40:45 +0000 (0:00:00.846) 0:00:23.282 *********
2025-08-29 14:40:46.283325 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 14:40:46.283336 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 14:40:46.283347 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 14:40:46.283358 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 14:40:46.283368 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 14:40:46.283379 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 14:40:46.283390 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 14:40:46.283400 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 14:40:46.283411 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 14:40:46.283422 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-08-29 14:40:46.283432 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 14:40:46.283443 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 14:40:46.283453 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 14:40:46.283464 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-08-29 14:40:46.283475 | orchestrator |
2025-08-29 14:40:46.283493 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-08-29 14:41:03.610185 | orchestrator | Friday 29 August 2025 14:40:46 +0000 (0:00:01.243) 0:00:24.526 *********
2025-08-29 14:41:03.610397 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:41:03.610419 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:41:03.610431 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:41:03.610442 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:41:03.610454 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:41:03.610464 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:41:03.610475 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:41:03.610487 | orchestrator |
2025-08-29 14:41:03.610499 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-08-29 14:41:03.610511 | orchestrator | Friday 29 August 2025 14:40:46 +0000 (0:00:00.645) 0:00:25.172 *********
2025-08-29 14:41:03.610524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:41:03.610538 | orchestrator |
2025-08-29 14:41:03.610550 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-08-29 14:41:03.610561 | orchestrator | Friday 29 August 2025 14:40:51 +0000 (0:00:04.927) 0:00:30.099 *********
2025-08-29 14:41:03.610574 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-08-29 14:41:03.610605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-08-29 14:41:03.610618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-08-29 14:41:03.610629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-08-29 14:41:03.610667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-08-29 14:41:03.610681 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-08-29 14:41:03.610700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-08-29 14:41:03.610714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-08-29 14:41:03.610727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-08-29 14:41:03.610738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-08-29 14:41:03.610749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-08-29 14:41:03.610781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-08-29 14:41:03.610793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-08-29 14:41:03.610804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15',
'mtu': 1350, 'vni': 23}}) 2025-08-29 14:41:03.610816 | orchestrator | 2025-08-29 14:41:03.610827 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-08-29 14:41:03.610838 | orchestrator | Friday 29 August 2025 14:40:57 +0000 (0:00:05.932) 0:00:36.032 ********* 2025-08-29 14:41:03.610849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:41:03.610861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:41:03.610873 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:41:03.610893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:41:03.610904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:41:03.610915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:41:03.610926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-08-29 14:41:03.610938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:41:03.610949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:41:03.610960 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:41:03.610971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:41:03.610982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:41:03.611004 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:41:10.285541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-08-29 14:41:10.285652 | orchestrator | 2025-08-29 14:41:10.285668 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-08-29 14:41:10.285681 | orchestrator | Friday 29 August 2025 14:41:03 +0000 (0:00:05.817) 0:00:41.849 ********* 2025-08-29 14:41:10.285694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:41:10.285706 | orchestrator | 2025-08-29 14:41:10.285717 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-08-29 14:41:10.285748 | orchestrator | Friday 29 August 2025 14:41:04 +0000 (0:00:01.402) 0:00:43.252 ********* 2025-08-29 14:41:10.285780 | orchestrator | ok: [testbed-manager] 2025-08-29 14:41:10.285793 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:41:10.285803 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:41:10.285814 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:41:10.285825 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:41:10.285835 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:41:10.285846 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:41:10.285857 | orchestrator | 2025-08-29 14:41:10.285873 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2025-08-29 14:41:10.285885 | orchestrator | Friday 29 August 2025 14:41:06 +0000 (0:00:01.185) 0:00:44.437 ********* 2025-08-29 14:41:10.285896 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:41:10.285908 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:41:10.285919 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:41:10.285929 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:41:10.285940 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:41:10.285951 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:41:10.285962 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:41:10.285972 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:41:10.285983 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:41:10.285994 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:41:10.286005 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:41:10.286064 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:41:10.286079 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:41:10.286091 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:41:10.286103 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:41:10.286115 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 
14:41:10.286127 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:41:10.286139 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:41:10.286151 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:41:10.286162 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:41:10.286174 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:41:10.286186 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:41:10.286198 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:41:10.286208 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:41:10.286219 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:41:10.286230 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:41:10.286240 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:41:10.286251 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:41:10.286282 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:41:10.286293 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:41:10.286303 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-08-29 14:41:10.286315 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-08-29 14:41:10.286334 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-08-29 14:41:10.286345 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-08-29 14:41:10.286355 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 14:41:10.286366 | orchestrator | 2025-08-29 14:41:10.286377 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-08-29 14:41:10.286405 | orchestrator | Friday 29 August 2025 14:41:08 +0000 (0:00:02.163) 0:00:46.601 ********* 2025-08-29 14:41:10.286416 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:41:10.286427 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:41:10.286437 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:41:10.286448 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:41:10.286459 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:41:10.286470 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:41:10.286480 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:41:10.286491 | orchestrator | 2025-08-29 14:41:10.286501 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-08-29 14:41:10.286512 | orchestrator | Friday 29 August 2025 14:41:09 +0000 (0:00:00.738) 0:00:47.339 ********* 2025-08-29 14:41:10.286523 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:41:10.286534 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:41:10.286545 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:41:10.286555 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:41:10.286566 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:41:10.286576 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:41:10.286587 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:41:10.286598 | orchestrator | 2025-08-29 14:41:10.286609 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:41:10.286620 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 14:41:10.286638 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:41:10.286650 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:41:10.286661 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:41:10.286672 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:41:10.286683 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:41:10.286694 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 14:41:10.286705 | orchestrator | 2025-08-29 14:41:10.286716 | orchestrator | 2025-08-29 14:41:10.286726 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:41:10.286737 | orchestrator | Friday 29 August 2025 14:41:09 +0000 (0:00:00.774) 0:00:48.114 ********* 2025-08-29 14:41:10.286748 | orchestrator | =============================================================================== 2025-08-29 14:41:10.286759 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.93s 2025-08-29 14:41:10.286770 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.82s 2025-08-29 14:41:10.286781 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.93s 2025-08-29 14:41:10.286792 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.47s 2025-08-29 14:41:10.286809 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.16s 2025-08-29 14:41:10.286820 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.13s 2025-08-29 14:41:10.286830 | orchestrator | osism.commons.network : Remove netplan 
configuration template ----------- 2.03s 2025-08-29 14:41:10.286841 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.71s 2025-08-29 14:41:10.286852 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s 2025-08-29 14:41:10.286862 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.58s 2025-08-29 14:41:10.286873 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2025-08-29 14:41:10.286884 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.40s 2025-08-29 14:41:10.286894 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s 2025-08-29 14:41:10.286905 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.24s 2025-08-29 14:41:10.286916 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2025-08-29 14:41:10.286927 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.19s 2025-08-29 14:41:10.286937 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2025-08-29 14:41:10.286948 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.05s 2025-08-29 14:41:10.286959 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-08-29 14:41:10.286970 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-08-29 14:41:10.638654 | orchestrator | + osism apply wireguard 2025-08-29 14:41:22.870831 | orchestrator | 2025-08-29 14:41:22 | INFO  | Task c8441797-4b36-49c0-b2a3-a28931ef9fde (wireguard) was prepared for execution. 
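
The "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks above write per-interface file pairs under /etc/systemd/network (the cleanup task later lists them as 30-vxlan0.netdev, 30-vxlan0.network, etc.). A hedged sketch of what such a pair could look like for the vxlan0 item on testbed-manager, using the vni/local_ip/mtu/addresses values visible in the loop output — the exact template output of the osism.commons.network role may differ, and the per-destination FDB handling shown here is an assumption:

```ini
# Sketch only -- field names per systemd.netdev(5)/systemd.network(5);
# values taken from the task's loop item for testbed-manager.

# /etc/systemd/network/30-vxlan0.netdev
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

# /etc/systemd/network/30-vxlan0.network
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

# The 'dests' list (192.168.16.10 .. 192.168.16.15) would typically become
# one static all-zeroes FDB entry per remote VTEP, e.g.:
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
```

Nodes whose item has an empty addresses list would get the same pair without the Address= line.
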
2025-08-29 14:41:22.870946 | orchestrator | 2025-08-29 14:41:22 | INFO  | It takes a moment until task c8441797-4b36-49c0-b2a3-a28931ef9fde (wireguard) has been started and output is visible here. 2025-08-29 14:41:44.754555 | orchestrator | 2025-08-29 14:41:44.754653 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-08-29 14:41:44.754668 | orchestrator | 2025-08-29 14:41:44.754679 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-08-29 14:41:44.754689 | orchestrator | Friday 29 August 2025 14:41:27 +0000 (0:00:00.227) 0:00:00.227 ********* 2025-08-29 14:41:44.754699 | orchestrator | ok: [testbed-manager] 2025-08-29 14:41:44.754709 | orchestrator | 2025-08-29 14:41:44.754719 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-08-29 14:41:44.754729 | orchestrator | Friday 29 August 2025 14:41:28 +0000 (0:00:01.661) 0:00:01.888 ********* 2025-08-29 14:41:44.754738 | orchestrator | changed: [testbed-manager] 2025-08-29 14:41:44.754748 | orchestrator | 2025-08-29 14:41:44.754759 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-08-29 14:41:44.754777 | orchestrator | Friday 29 August 2025 14:41:35 +0000 (0:00:07.017) 0:00:08.905 ********* 2025-08-29 14:41:44.754794 | orchestrator | changed: [testbed-manager] 2025-08-29 14:41:44.754810 | orchestrator | 2025-08-29 14:41:44.754827 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-08-29 14:41:44.754845 | orchestrator | Friday 29 August 2025 14:41:36 +0000 (0:00:00.604) 0:00:09.510 ********* 2025-08-29 14:41:44.754862 | orchestrator | changed: [testbed-manager] 2025-08-29 14:41:44.754879 | orchestrator | 2025-08-29 14:41:44.754891 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-08-29 14:41:44.754900 | orchestrator 
| Friday 29 August 2025 14:41:36 +0000 (0:00:00.454) 0:00:09.965 ********* 2025-08-29 14:41:44.754910 | orchestrator | ok: [testbed-manager] 2025-08-29 14:41:44.754920 | orchestrator | 2025-08-29 14:41:44.754944 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-08-29 14:41:44.754955 | orchestrator | Friday 29 August 2025 14:41:37 +0000 (0:00:00.553) 0:00:10.518 ********* 2025-08-29 14:41:44.754984 | orchestrator | ok: [testbed-manager] 2025-08-29 14:41:44.754994 | orchestrator | 2025-08-29 14:41:44.755004 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-08-29 14:41:44.755013 | orchestrator | Friday 29 August 2025 14:41:37 +0000 (0:00:00.557) 0:00:11.076 ********* 2025-08-29 14:41:44.755023 | orchestrator | ok: [testbed-manager] 2025-08-29 14:41:44.755033 | orchestrator | 2025-08-29 14:41:44.755042 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-08-29 14:41:44.755057 | orchestrator | Friday 29 August 2025 14:41:38 +0000 (0:00:00.419) 0:00:11.495 ********* 2025-08-29 14:41:44.755072 | orchestrator | changed: [testbed-manager] 2025-08-29 14:41:44.755089 | orchestrator | 2025-08-29 14:41:44.755106 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-08-29 14:41:44.755123 | orchestrator | Friday 29 August 2025 14:41:39 +0000 (0:00:01.271) 0:00:12.767 ********* 2025-08-29 14:41:44.755141 | orchestrator | changed: [testbed-manager] => (item=None) 2025-08-29 14:41:44.755159 | orchestrator | changed: [testbed-manager] 2025-08-29 14:41:44.755173 | orchestrator | 2025-08-29 14:41:44.755184 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-08-29 14:41:44.755223 | orchestrator | Friday 29 August 2025 14:41:40 +0000 (0:00:01.006) 0:00:13.773 ********* 2025-08-29 14:41:44.755242 | orchestrator | changed: 
[testbed-manager] 2025-08-29 14:41:44.755257 | orchestrator | 2025-08-29 14:41:44.755273 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-08-29 14:41:44.755289 | orchestrator | Friday 29 August 2025 14:41:43 +0000 (0:00:02.791) 0:00:16.565 ********* 2025-08-29 14:41:44.755305 | orchestrator | changed: [testbed-manager] 2025-08-29 14:41:44.755320 | orchestrator | 2025-08-29 14:41:44.755337 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:41:44.755353 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:41:44.755370 | orchestrator | 2025-08-29 14:41:44.755388 | orchestrator | 2025-08-29 14:41:44.755403 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:41:44.755420 | orchestrator | Friday 29 August 2025 14:41:44 +0000 (0:00:01.009) 0:00:17.574 ********* 2025-08-29 14:41:44.755437 | orchestrator | =============================================================================== 2025-08-29 14:41:44.755455 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.02s 2025-08-29 14:41:44.755472 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 2.79s 2025-08-29 14:41:44.755489 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.66s 2025-08-29 14:41:44.755507 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.27s 2025-08-29 14:41:44.755524 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.01s 2025-08-29 14:41:44.755538 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s 2025-08-29 14:41:44.755548 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.60s 
2025-08-29 14:41:44.755557 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s 2025-08-29 14:41:44.755567 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s 2025-08-29 14:41:44.755576 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-08-29 14:41:44.755586 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-08-29 14:41:45.070139 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-08-29 14:41:45.111011 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-08-29 14:41:45.111084 | orchestrator | Dload Upload Total Spent Left Speed 2025-08-29 14:41:45.188162 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 195 0 --:--:-- --:--:-- --:--:-- 197 2025-08-29 14:41:45.203226 | orchestrator | + osism apply --environment custom workarounds 2025-08-29 14:41:47.153979 | orchestrator | 2025-08-29 14:41:47 | INFO  | Trying to run play workarounds in environment custom 2025-08-29 14:41:57.249828 | orchestrator | 2025-08-29 14:41:57 | INFO  | Task c55842f6-a419-4b0e-96b3-23937c33d7c4 (workarounds) was prepared for execution. 2025-08-29 14:41:57.249936 | orchestrator | 2025-08-29 14:41:57 | INFO  | It takes a moment until task c55842f6-a419-4b0e-96b3-23937c33d7c4 (workarounds) has been started and output is visible here. 
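
The wireguard play just completed generates server/preshared keys, renders /etc/wireguard/wg0.conf plus client configuration files, and starts wg-quick@wg0.service. A minimal sketch of the wg-quick config shape being deployed — the interface address, port, and peer layout here are placeholders, not values read from this deployment:

```ini
# Illustrative wg-quick(8) layout only; the role's actual template,
# addresses, and port are assumptions.

# /etc/wireguard/wg0.conf (server side)
[Interface]
Address = 192.168.48.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
```

The "Restart wg0 service" handler corresponds to restarting wg-quick@wg0, which re-reads this file.
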
2025-08-29 14:42:22.798914 | orchestrator | 2025-08-29 14:42:22.799009 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:42:22.799027 | orchestrator | 2025-08-29 14:42:22.799039 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-08-29 14:42:22.799050 | orchestrator | Friday 29 August 2025 14:42:01 +0000 (0:00:00.152) 0:00:00.152 ********* 2025-08-29 14:42:22.799061 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-08-29 14:42:22.799072 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-08-29 14:42:22.799083 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-08-29 14:42:22.799094 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-08-29 14:42:22.799105 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-08-29 14:42:22.799129 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-08-29 14:42:22.799176 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-08-29 14:42:22.799188 | orchestrator | 2025-08-29 14:42:22.799199 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-08-29 14:42:22.799209 | orchestrator | 2025-08-29 14:42:22.799220 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-08-29 14:42:22.799231 | orchestrator | Friday 29 August 2025 14:42:02 +0000 (0:00:00.791) 0:00:00.943 ********* 2025-08-29 14:42:22.799242 | orchestrator | ok: [testbed-manager] 2025-08-29 14:42:22.799254 | orchestrator | 2025-08-29 14:42:22.799264 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-08-29 14:42:22.799275 | orchestrator | 2025-08-29 14:42:22.799286 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-08-29 14:42:22.799297 | orchestrator | Friday 29 August 2025 14:42:04 +0000 (0:00:02.370) 0:00:03.314 ********* 2025-08-29 14:42:22.799307 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:42:22.799318 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:42:22.799329 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:42:22.799340 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:42:22.799350 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:42:22.799361 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:42:22.799372 | orchestrator | 2025-08-29 14:42:22.799382 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-08-29 14:42:22.799393 | orchestrator | 2025-08-29 14:42:22.799404 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-08-29 14:42:22.799415 | orchestrator | Friday 29 August 2025 14:42:06 +0000 (0:00:01.755) 0:00:05.069 ********* 2025-08-29 14:42:22.799426 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:42:22.799438 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:42:22.799449 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:42:22.799459 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:42:22.799470 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:42:22.799502 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-08-29 14:42:22.799515 | orchestrator | 2025-08-29 14:42:22.799528 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-08-29 14:42:22.799539 | orchestrator | Friday 29 August 2025 14:42:07 +0000 (0:00:01.512) 0:00:06.582 ********* 2025-08-29 14:42:22.799551 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:42:22.799564 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:42:22.799575 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:42:22.799587 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:42:22.799598 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:42:22.799610 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:42:22.799620 | orchestrator | 2025-08-29 14:42:22.799631 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-08-29 14:42:22.799641 | orchestrator | Friday 29 August 2025 14:42:11 +0000 (0:00:03.834) 0:00:10.417 ********* 2025-08-29 14:42:22.799652 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:42:22.799662 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:42:22.799673 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:42:22.799683 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:42:22.799694 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:42:22.799704 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:42:22.799715 | orchestrator | 2025-08-29 14:42:22.799726 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-08-29 14:42:22.799736 | orchestrator | 2025-08-29 14:42:22.799747 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-08-29 14:42:22.799757 | orchestrator | Friday 29 August 2025 14:42:12 +0000 (0:00:00.774) 0:00:11.191 ********* 2025-08-29 14:42:22.799768 | orchestrator | changed: [testbed-manager] 2025-08-29 14:42:22.799779 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:42:22.799789 | orchestrator | changed: [testbed-node-4] 2025-08-29 
14:42:22.799799 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:22.799810 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:42:22.799820 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:42:22.799831 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:42:22.799841 | orchestrator |
2025-08-29 14:42:22.799852 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-08-29 14:42:22.799862 | orchestrator | Friday 29 August 2025 14:42:13 +0000 (0:00:01.693) 0:00:12.885 *********
2025-08-29 14:42:22.799873 | orchestrator | changed: [testbed-manager]
2025-08-29 14:42:22.799883 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:22.799893 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:22.799904 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:42:22.799915 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:22.799925 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:42:22.799951 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:42:22.799962 | orchestrator |
2025-08-29 14:42:22.799973 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-08-29 14:42:22.799984 | orchestrator | Friday 29 August 2025 14:42:15 +0000 (0:00:01.685) 0:00:14.571 *********
2025-08-29 14:42:22.799995 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:22.800006 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:22.800016 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:22.800027 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:42:22.800038 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:42:22.800049 | orchestrator | ok: [testbed-manager]
2025-08-29 14:42:22.800060 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:42:22.800071 | orchestrator |
2025-08-29 14:42:22.800081 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-08-29 14:42:22.800092 | orchestrator | Friday 29 August 2025 14:42:17 +0000 (0:00:01.498) 0:00:16.069 *********
2025-08-29 14:42:22.800103 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:22.800114 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:22.800152 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:42:22.800166 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:22.800177 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:42:22.800188 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:42:22.800198 | orchestrator | changed: [testbed-manager]
2025-08-29 14:42:22.800209 | orchestrator |
2025-08-29 14:42:22.800223 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-08-29 14:42:22.800242 | orchestrator | Friday 29 August 2025 14:42:19 +0000 (0:00:02.082) 0:00:18.151 *********
2025-08-29 14:42:22.800261 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:42:22.800278 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:42:22.800295 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:42:22.800312 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:42:22.800329 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:42:22.800346 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:42:22.800362 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:42:22.800380 | orchestrator |
2025-08-29 14:42:22.800397 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-08-29 14:42:22.800414 | orchestrator |
2025-08-29 14:42:22.800434 | orchestrator | TASK [Install python3-docker] **************************************************
2025-08-29 14:42:22.800453 | orchestrator | Friday 29 August 2025 14:42:19 +0000 (0:00:00.632) 0:00:18.784 *********
2025-08-29 14:42:22.800471 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:42:22.800490 | orchestrator | ok: [testbed-manager]
2025-08-29 14:42:22.800502 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:42:22.800512 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:42:22.800523 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:42:22.800533 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:42:22.800544 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:42:22.800554 | orchestrator |
2025-08-29 14:42:22.800565 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:42:22.800577 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-08-29 14:42:22.800589 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:22.800600 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:22.800610 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:22.800621 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:22.800632 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:22.800642 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:22.800653 | orchestrator |
2025-08-29 14:42:22.800664 | orchestrator |
2025-08-29 14:42:22.800674 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:42:22.800685 | orchestrator | Friday 29 August 2025 14:42:22 +0000 (0:00:02.895) 0:00:21.680 *********
2025-08-29 14:42:22.800695 | orchestrator | ===============================================================================
2025-08-29 14:42:22.800706 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s
2025-08-29 14:42:22.800717 | orchestrator | Install python3-docker -------------------------------------------------- 2.90s
2025-08-29 14:42:22.800727 | orchestrator | Apply netplan configuration --------------------------------------------- 2.37s
2025-08-29 14:42:22.800738 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.08s
2025-08-29 14:42:22.800758 | orchestrator | Apply netplan configuration --------------------------------------------- 1.76s
2025-08-29 14:42:22.800768 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.69s
2025-08-29 14:42:22.800779 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.69s
2025-08-29 14:42:22.800789 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s
2025-08-29 14:42:22.800800 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s
2025-08-29 14:42:22.800810 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s
2025-08-29 14:42:22.800821 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s
2025-08-29 14:42:22.800841 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2025-08-29 14:42:23.464357 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-08-29 14:42:35.491163 | orchestrator | 2025-08-29 14:42:35 | INFO  | Task b889fdaa-d2f1-4129-a122-7fb9c8c28d3f (reboot) was prepared for execution.
2025-08-29 14:42:35.491272 | orchestrator | 2025-08-29 14:42:35 | INFO  | It takes a moment until task b889fdaa-d2f1-4129-a122-7fb9c8c28d3f (reboot) has been started and output is visible here.
2025-08-29 14:42:45.676946 | orchestrator |
2025-08-29 14:42:45.677052 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:42:45.677067 | orchestrator |
2025-08-29 14:42:45.677077 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:42:45.677102 | orchestrator | Friday 29 August 2025 14:42:39 +0000 (0:00:00.217) 0:00:00.217 *********
2025-08-29 14:42:45.677161 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:42:45.677171 | orchestrator |
2025-08-29 14:42:45.677180 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:42:45.677189 | orchestrator | Friday 29 August 2025 14:42:39 +0000 (0:00:00.101) 0:00:00.318 *********
2025-08-29 14:42:45.677198 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:42:45.677207 | orchestrator |
2025-08-29 14:42:45.677216 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:42:45.677225 | orchestrator | Friday 29 August 2025 14:42:40 +0000 (0:00:00.959) 0:00:01.278 *********
2025-08-29 14:42:45.677234 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:42:45.677243 | orchestrator |
2025-08-29 14:42:45.677251 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:42:45.677260 | orchestrator |
2025-08-29 14:42:45.677269 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:42:45.677278 | orchestrator | Friday 29 August 2025 14:42:40 +0000 (0:00:00.115) 0:00:01.393 *********
2025-08-29 14:42:45.677299 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:42:45.677308 | orchestrator |
2025-08-29 14:42:45.677317 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:42:45.677326 | orchestrator | Friday 29 August 2025 14:42:40 +0000 (0:00:00.089) 0:00:01.483 *********
2025-08-29 14:42:45.677334 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:42:45.677352 | orchestrator |
2025-08-29 14:42:45.677361 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:42:45.677370 | orchestrator | Friday 29 August 2025 14:42:41 +0000 (0:00:00.676) 0:00:02.160 *********
2025-08-29 14:42:45.677379 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:42:45.677388 | orchestrator |
2025-08-29 14:42:45.677396 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:42:45.677405 | orchestrator |
2025-08-29 14:42:45.677414 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:42:45.677423 | orchestrator | Friday 29 August 2025 14:42:41 +0000 (0:00:00.112) 0:00:02.272 *********
2025-08-29 14:42:45.677432 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:42:45.677440 | orchestrator |
2025-08-29 14:42:45.677449 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:42:45.677478 | orchestrator | Friday 29 August 2025 14:42:41 +0000 (0:00:00.209) 0:00:02.482 *********
2025-08-29 14:42:45.677489 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:42:45.677498 | orchestrator |
2025-08-29 14:42:45.677508 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:42:45.677518 | orchestrator | Friday 29 August 2025 14:42:42 +0000 (0:00:00.692) 0:00:03.174 *********
2025-08-29 14:42:45.677528 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:42:45.677537 | orchestrator |
2025-08-29 14:42:45.677547 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:42:45.677556 | orchestrator |
2025-08-29 14:42:45.677566 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:42:45.677576 | orchestrator | Friday 29 August 2025 14:42:42 +0000 (0:00:00.153) 0:00:03.327 *********
2025-08-29 14:42:45.677586 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:42:45.677596 | orchestrator |
2025-08-29 14:42:45.677606 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:42:45.677616 | orchestrator | Friday 29 August 2025 14:42:42 +0000 (0:00:00.113) 0:00:03.441 *********
2025-08-29 14:42:45.677626 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:42:45.677636 | orchestrator |
2025-08-29 14:42:45.677645 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:42:45.677655 | orchestrator | Friday 29 August 2025 14:42:43 +0000 (0:00:00.695) 0:00:04.136 *********
2025-08-29 14:42:45.677664 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:42:45.677674 | orchestrator |
2025-08-29 14:42:45.677683 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:42:45.677693 | orchestrator |
2025-08-29 14:42:45.677703 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:42:45.677712 | orchestrator | Friday 29 August 2025 14:42:43 +0000 (0:00:00.119) 0:00:04.255 *********
2025-08-29 14:42:45.677722 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:42:45.677732 | orchestrator |
2025-08-29 14:42:45.677741 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:42:45.677750 | orchestrator | Friday 29 August 2025 14:42:43 +0000 (0:00:00.117) 0:00:04.373 *********
2025-08-29 14:42:45.677759 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:42:45.677768 | orchestrator |
2025-08-29 14:42:45.677776 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:42:45.677785 | orchestrator | Friday 29 August 2025 14:42:44 +0000 (0:00:00.668) 0:00:05.041 *********
2025-08-29 14:42:45.677794 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:42:45.677803 | orchestrator |
2025-08-29 14:42:45.677811 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-08-29 14:42:45.677820 | orchestrator |
2025-08-29 14:42:45.677829 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-08-29 14:42:45.677838 | orchestrator | Friday 29 August 2025 14:42:44 +0000 (0:00:00.109) 0:00:05.151 *********
2025-08-29 14:42:45.677847 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:42:45.677855 | orchestrator |
2025-08-29 14:42:45.677864 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-08-29 14:42:45.677873 | orchestrator | Friday 29 August 2025 14:42:44 +0000 (0:00:00.113) 0:00:05.265 *********
2025-08-29 14:42:45.677882 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:42:45.677891 | orchestrator |
2025-08-29 14:42:45.677900 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-08-29 14:42:45.677908 | orchestrator | Friday 29 August 2025 14:42:45 +0000 (0:00:00.697) 0:00:05.963 *********
2025-08-29 14:42:45.677933 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:42:45.677942 | orchestrator |
2025-08-29 14:42:45.677951 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:42:45.677961 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:45.677977 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:45.677986 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:45.677995 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:45.678004 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:45.678012 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:42:45.678069 | orchestrator |
2025-08-29 14:42:45.678079 | orchestrator |
2025-08-29 14:42:45.678088 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:42:45.678096 | orchestrator | Friday 29 August 2025 14:42:45 +0000 (0:00:00.037) 0:00:06.001 *********
2025-08-29 14:42:45.678118 | orchestrator | ===============================================================================
2025-08-29 14:42:45.678128 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.39s
2025-08-29 14:42:45.678137 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s
2025-08-29 14:42:45.678146 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s
2025-08-29 14:42:46.038666 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-08-29 14:42:58.212906 | orchestrator | 2025-08-29 14:42:58 | INFO  | Task 5a1fc04f-7f51-40ae-8019-82770748bc1a (wait-for-connection) was prepared for execution.
2025-08-29 14:42:58.213008 | orchestrator | 2025-08-29 14:42:58 | INFO  | It takes a moment until task 5a1fc04f-7f51-40ae-8019-82770748bc1a (wait-for-connection) has been started and output is visible here.
2025-08-29 14:43:14.357803 | orchestrator | 2025-08-29 14:43:14.357956 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-08-29 14:43:14.357973 | orchestrator | 2025-08-29 14:43:14.358011 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-08-29 14:43:14.358163 | orchestrator | Friday 29 August 2025 14:43:02 +0000 (0:00:00.245) 0:00:00.246 ********* 2025-08-29 14:43:14.358177 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:43:14.358190 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:43:14.358201 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:43:14.358212 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:14.358223 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:14.358234 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:43:14.358246 | orchestrator | 2025-08-29 14:43:14.358257 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:43:14.358269 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:43:14.358283 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:43:14.358295 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:43:14.358307 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:43:14.358320 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:43:14.358332 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:43:14.358372 | orchestrator | 2025-08-29 14:43:14.358384 | orchestrator | 2025-08-29 14:43:14.358396 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 14:43:14.358409 | orchestrator | Friday 29 August 2025 14:43:13 +0000 (0:00:11.627) 0:00:11.873 ********* 2025-08-29 14:43:14.358421 | orchestrator | =============================================================================== 2025-08-29 14:43:14.358433 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s 2025-08-29 14:43:14.677491 | orchestrator | + osism apply hddtemp 2025-08-29 14:43:26.731197 | orchestrator | 2025-08-29 14:43:26 | INFO  | Task bffa2b85-b2d0-4bfd-954a-de4127c2f1b8 (hddtemp) was prepared for execution. 2025-08-29 14:43:26.731304 | orchestrator | 2025-08-29 14:43:26 | INFO  | It takes a moment until task bffa2b85-b2d0-4bfd-954a-de4127c2f1b8 (hddtemp) has been started and output is visible here. 2025-08-29 14:43:55.600782 | orchestrator | 2025-08-29 14:43:55.600899 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-08-29 14:43:55.600916 | orchestrator | 2025-08-29 14:43:55.600928 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-08-29 14:43:55.600939 | orchestrator | Friday 29 August 2025 14:43:30 +0000 (0:00:00.282) 0:00:00.282 ********* 2025-08-29 14:43:55.600950 | orchestrator | ok: [testbed-manager] 2025-08-29 14:43:55.600962 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:43:55.600987 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:43:55.600998 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:43:55.601007 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:55.601017 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:55.601027 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:43:55.601037 | orchestrator | 2025-08-29 14:43:55.601047 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-08-29 14:43:55.601056 | orchestrator | Friday 29 August 2025 
14:43:31 +0000 (0:00:00.726) 0:00:01.008 ********* 2025-08-29 14:43:55.601068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:43:55.601081 | orchestrator | 2025-08-29 14:43:55.601091 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-08-29 14:43:55.601100 | orchestrator | Friday 29 August 2025 14:43:32 +0000 (0:00:01.208) 0:00:02.217 ********* 2025-08-29 14:43:55.601110 | orchestrator | ok: [testbed-manager] 2025-08-29 14:43:55.601120 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:43:55.601129 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:43:55.601139 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:43:55.601218 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:55.601228 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:55.601238 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:43:55.601248 | orchestrator | 2025-08-29 14:43:55.601258 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-08-29 14:43:55.601268 | orchestrator | Friday 29 August 2025 14:43:34 +0000 (0:00:01.902) 0:00:04.119 ********* 2025-08-29 14:43:55.601277 | orchestrator | changed: [testbed-manager] 2025-08-29 14:43:55.601288 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:43:55.601298 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:43:55.601308 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:43:55.601319 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:43:55.601329 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:43:55.601340 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:43:55.601350 | orchestrator | 2025-08-29 14:43:55.601361 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-08-29 14:43:55.601372 | orchestrator | Friday 29 August 2025 14:43:35 +0000 (0:00:01.240) 0:00:05.360 ********* 2025-08-29 14:43:55.601383 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:43:55.601394 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:43:55.601427 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:43:55.601438 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:43:55.601448 | orchestrator | ok: [testbed-manager] 2025-08-29 14:43:55.601459 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:43:55.601469 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:43:55.601480 | orchestrator | 2025-08-29 14:43:55.601491 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-08-29 14:43:55.601502 | orchestrator | Friday 29 August 2025 14:43:38 +0000 (0:00:02.114) 0:00:07.475 ********* 2025-08-29 14:43:55.601513 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:43:55.601524 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:43:55.601535 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:43:55.601545 | orchestrator | changed: [testbed-manager] 2025-08-29 14:43:55.601554 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:43:55.601564 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:43:55.601573 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:43:55.601584 | orchestrator | 2025-08-29 14:43:55.601600 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-08-29 14:43:55.601616 | orchestrator | Friday 29 August 2025 14:43:38 +0000 (0:00:00.892) 0:00:08.367 ********* 2025-08-29 14:43:55.601632 | orchestrator | changed: [testbed-manager] 2025-08-29 14:43:55.601647 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:43:55.601663 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:43:55.601678 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:43:55.601693 | orchestrator | changed: 
[testbed-node-4] 2025-08-29 14:43:55.601708 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:43:55.601724 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:43:55.601740 | orchestrator | 2025-08-29 14:43:55.601758 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-08-29 14:43:55.601774 | orchestrator | Friday 29 August 2025 14:43:51 +0000 (0:00:12.952) 0:00:21.319 ********* 2025-08-29 14:43:55.601792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 14:43:55.601803 | orchestrator | 2025-08-29 14:43:55.601812 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-08-29 14:43:55.601822 | orchestrator | Friday 29 August 2025 14:43:53 +0000 (0:00:01.430) 0:00:22.750 ********* 2025-08-29 14:43:55.601831 | orchestrator | changed: [testbed-manager] 2025-08-29 14:43:55.601841 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:43:55.601851 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:43:55.601860 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:43:55.601870 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:43:55.601879 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:43:55.601889 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:43:55.601898 | orchestrator | 2025-08-29 14:43:55.601908 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:43:55.601918 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:43:55.601949 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:43:55.601959 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:43:55.601976 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:43:55.601986 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:43:55.601996 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:43:55.602074 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:43:55.602088 | orchestrator | 2025-08-29 14:43:55.602097 | orchestrator | 2025-08-29 14:43:55.602107 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:43:55.602117 | orchestrator | Friday 29 August 2025 14:43:55 +0000 (0:00:01.916) 0:00:24.667 ********* 2025-08-29 14:43:55.602127 | orchestrator | =============================================================================== 2025-08-29 14:43:55.602136 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.95s 2025-08-29 14:43:55.602167 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.11s 2025-08-29 14:43:55.602177 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.92s 2025-08-29 14:43:55.602186 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.90s 2025-08-29 14:43:55.602196 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.43s 2025-08-29 14:43:55.602206 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.24s 2025-08-29 14:43:55.602215 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2025-08-29 14:43:55.602225 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.89s 2025-08-29 14:43:55.602234 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2025-08-29 14:43:55.915102 | orchestrator | ++ semver latest 7.1.1 2025-08-29 14:43:55.965721 | orchestrator | + [[ -1 -ge 0 ]] 2025-08-29 14:43:55.965817 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-08-29 14:43:55.965834 | orchestrator | + sudo systemctl restart manager.service 2025-08-29 14:44:09.519633 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-08-29 14:44:09.519743 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-08-29 14:44:09.519758 | orchestrator | + local max_attempts=60 2025-08-29 14:44:09.519771 | orchestrator | + local name=ceph-ansible 2025-08-29 14:44:09.519782 | orchestrator | + local attempt_num=1 2025-08-29 14:44:09.519794 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:09.552670 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:09.552772 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:09.552788 | orchestrator | + sleep 5 2025-08-29 14:44:14.557123 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:14.594596 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:14.594674 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:14.594687 | orchestrator | + sleep 5 2025-08-29 14:44:19.597624 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:19.639641 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:19.639724 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:19.639739 | orchestrator | + sleep 5 2025-08-29 14:44:24.645416 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:24.687860 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:24.687990 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:24.688006 | orchestrator | + sleep 5 2025-08-29 14:44:29.692946 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:29.734336 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:29.734417 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:29.734430 | orchestrator | + sleep 5 2025-08-29 14:44:34.739446 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:34.779025 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:34.779104 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:34.779109 | orchestrator | + sleep 5 2025-08-29 14:44:39.784076 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:39.824894 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:39.824977 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:39.825016 | orchestrator | + sleep 5 2025-08-29 14:44:44.831797 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:44.892317 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:44.892426 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:44.892447 | orchestrator | + sleep 5 2025-08-29 14:44:49.904609 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:50.005601 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:44:50.005700 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:50.005715 | orchestrator | + sleep 5 2025-08-29 14:44:55.012970 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:44:55.050361 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-08-29 14:44:55.050424 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:44:55.050438 | orchestrator | + sleep 5 2025-08-29 14:45:00.056109 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:45:00.099712 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:45:00.099794 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:45:00.099808 | orchestrator | + sleep 5 2025-08-29 14:45:05.104274 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:45:05.142679 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:45:05.142756 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:45:05.142768 | orchestrator | + sleep 5 2025-08-29 14:45:10.147864 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:45:10.191359 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-08-29 14:45:10.191420 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-08-29 14:45:10.191428 | orchestrator | + sleep 5 2025-08-29 14:45:15.197562 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-08-29 14:45:15.229991 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:45:15.230087 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-08-29 14:45:15.230246 | orchestrator | + local max_attempts=60 2025-08-29 14:45:15.230257 | orchestrator | + local name=kolla-ansible 2025-08-29 14:45:15.230263 | orchestrator | + local attempt_num=1 2025-08-29 14:45:15.230435 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-08-29 14:45:15.263596 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:45:15.263657 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-08-29 14:45:15.263664 | orchestrator | + local max_attempts=60 2025-08-29 
14:45:15.263671 | orchestrator | + local name=osism-ansible 2025-08-29 14:45:15.263676 | orchestrator | + local attempt_num=1 2025-08-29 14:45:15.263947 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-08-29 14:45:15.297556 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-08-29 14:45:15.297681 | orchestrator | + [[ true == \t\r\u\e ]] 2025-08-29 14:45:15.297698 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-08-29 14:45:15.474520 | orchestrator | ARA in ceph-ansible already disabled. 2025-08-29 14:45:15.643679 | orchestrator | ARA in kolla-ansible already disabled. 2025-08-29 14:45:15.791931 | orchestrator | ARA in osism-ansible already disabled. 2025-08-29 14:45:15.943924 | orchestrator | ARA in osism-kubernetes already disabled. 2025-08-29 14:45:15.944489 | orchestrator | + osism apply gather-facts 2025-08-29 14:45:28.160783 | orchestrator | 2025-08-29 14:45:28 | INFO  | Task 592f0f5b-f832-4ac7-a321-f6abb23e8559 (gather-facts) was prepared for execution. 2025-08-29 14:45:28.160877 | orchestrator | 2025-08-29 14:45:28 | INFO  | It takes a moment until task 592f0f5b-f832-4ac7-a321-f6abb23e8559 (gather-facts) has been started and output is visible here. 
2025-08-29 14:45:41.847199 | orchestrator | 2025-08-29 14:45:41.847375 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:45:41.847395 | orchestrator | 2025-08-29 14:45:41.847407 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:45:41.847419 | orchestrator | Friday 29 August 2025 14:45:32 +0000 (0:00:00.236) 0:00:00.236 ********* 2025-08-29 14:45:41.847431 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:45:41.847443 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:45:41.847454 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:45:41.847465 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:45:41.847503 | orchestrator | ok: [testbed-manager] 2025-08-29 14:45:41.847514 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:45:41.847525 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:45:41.847535 | orchestrator | 2025-08-29 14:45:41.847546 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 14:45:41.847557 | orchestrator | 2025-08-29 14:45:41.847568 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 14:45:41.847579 | orchestrator | Friday 29 August 2025 14:45:40 +0000 (0:00:08.544) 0:00:08.781 ********* 2025-08-29 14:45:41.847589 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:45:41.847601 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:45:41.847611 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:45:41.847622 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:45:41.847632 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:45:41.847643 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:45:41.847653 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:45:41.847664 | orchestrator | 2025-08-29 14:45:41.847674 | orchestrator | PLAY RECAP 
********************************************************************* 2025-08-29 14:45:41.847685 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:45:41.847697 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:45:41.847708 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:45:41.847719 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:45:41.847730 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:45:41.847742 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:45:41.847754 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 14:45:41.847766 | orchestrator | 2025-08-29 14:45:41.847778 | orchestrator | 2025-08-29 14:45:41.847790 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:45:41.847802 | orchestrator | Friday 29 August 2025 14:45:41 +0000 (0:00:00.564) 0:00:09.346 ********* 2025-08-29 14:45:41.847815 | orchestrator | =============================================================================== 2025-08-29 14:45:41.847826 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.55s 2025-08-29 14:45:41.847838 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-08-29 14:45:42.139837 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-08-29 14:45:42.160676 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-08-29 14:45:42.172922 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-08-29 14:45:42.185155 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-08-29 14:45:42.196936 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-08-29 14:45:42.209419 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-08-29 14:45:42.226895 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-08-29 14:45:42.241753 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-08-29 14:45:42.254480 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-08-29 14:45:42.272158 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-08-29 14:45:42.284014 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-08-29 14:45:42.296392 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-08-29 14:45:42.308542 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-08-29 14:45:42.324750 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-08-29 14:45:42.336777 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-08-29 14:45:42.356395 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-08-29 14:45:42.375429 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-08-29 14:45:42.395734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-08-29 14:45:42.413884 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-08-29 14:45:42.432786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-08-29 14:45:42.453346 | orchestrator | + [[ false == \t\r\u\e ]] 2025-08-29 14:45:42.805996 | orchestrator | ok: Runtime: 0:24:15.022354 2025-08-29 14:45:42.920979 | 2025-08-29 14:45:42.921131 | TASK [Deploy services] 2025-08-29 14:45:43.453466 | orchestrator | skipping: Conditional result was False 2025-08-29 14:45:43.470798 | 2025-08-29 14:45:43.470995 | TASK [Deploy in a nutshell] 2025-08-29 14:45:44.170928 | orchestrator | 2025-08-29 14:45:44.171087 | orchestrator | # PULL IMAGES 2025-08-29 14:45:44.171120 | orchestrator | + set -e 2025-08-29 14:45:44.171129 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-08-29 14:45:44.171140 | orchestrator | ++ export INTERACTIVE=false 2025-08-29 14:45:44.171148 | orchestrator | ++ INTERACTIVE=false 2025-08-29 14:45:44.171153 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-08-29 14:45:44.171180 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-08-29 14:45:44.171190 | orchestrator | + source /opt/manager-vars.sh 2025-08-29 14:45:44.171196 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-08-29 14:45:44.171204 | orchestrator | ++ NUMBER_OF_NODES=6 2025-08-29 14:45:44.171208 | orchestrator | ++ export CEPH_VERSION=reef 2025-08-29 14:45:44.171243 | orchestrator | ++ CEPH_VERSION=reef 2025-08-29 14:45:44.171248 | 
orchestrator | ++ export CONFIGURATION_VERSION=main 2025-08-29 14:45:44.171256 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-08-29 14:45:44.171260 | orchestrator | ++ export MANAGER_VERSION=latest 2025-08-29 14:45:44.171267 | orchestrator | ++ MANAGER_VERSION=latest 2025-08-29 14:45:44.171271 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-08-29 14:45:44.171276 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-08-29 14:45:44.171280 | orchestrator | ++ export ARA=false 2025-08-29 14:45:44.171284 | orchestrator | ++ ARA=false 2025-08-29 14:45:44.171287 | orchestrator | ++ export DEPLOY_MODE=manager 2025-08-29 14:45:44.171291 | orchestrator | ++ DEPLOY_MODE=manager 2025-08-29 14:45:44.171295 | orchestrator | ++ export TEMPEST=false 2025-08-29 14:45:44.171299 | orchestrator | ++ TEMPEST=false 2025-08-29 14:45:44.171302 | orchestrator | ++ export IS_ZUUL=true 2025-08-29 14:45:44.171306 | orchestrator | ++ IS_ZUUL=true 2025-08-29 14:45:44.171310 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226 2025-08-29 14:45:44.171314 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226 2025-08-29 14:45:44.171318 | orchestrator | ++ export EXTERNAL_API=false 2025-08-29 14:45:44.171322 | orchestrator | ++ EXTERNAL_API=false 2025-08-29 14:45:44.171325 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-08-29 14:45:44.171329 | orchestrator | ++ IMAGE_USER=ubuntu 2025-08-29 14:45:44.171333 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-08-29 14:45:44.171337 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-08-29 14:45:44.171340 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-08-29 14:45:44.171348 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-08-29 14:45:44.171353 | orchestrator | + echo 2025-08-29 14:45:44.171356 | orchestrator | + echo '# PULL IMAGES' 2025-08-29 14:45:44.171360 | orchestrator | + echo 2025-08-29 14:45:44.171392 | orchestrator | 2025-08-29 14:45:44.172598 | orchestrator | ++ semver latest 7.0.0 2025-08-29 
14:45:44.236911 | orchestrator | + [[ -1 -ge 0 ]] 2025-08-29 14:45:44.237014 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-08-29 14:45:44.237029 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-08-29 14:45:46.167379 | orchestrator | 2025-08-29 14:45:46 | INFO  | Trying to run play pull-images in environment custom 2025-08-29 14:45:56.396823 | orchestrator | 2025-08-29 14:45:56 | INFO  | Task ac7094c4-18b7-41c0-be8a-4df4dbd63e92 (pull-images) was prepared for execution. 2025-08-29 14:45:56.396905 | orchestrator | 2025-08-29 14:45:56 | INFO  | Task ac7094c4-18b7-41c0-be8a-4df4dbd63e92 is running in background. No more output. Check ARA for logs. 2025-08-29 14:45:58.772807 | orchestrator | 2025-08-29 14:45:58 | INFO  | Trying to run play wipe-partitions in environment custom 2025-08-29 14:46:08.965473 | orchestrator | 2025-08-29 14:46:08 | INFO  | Task 0280401e-810c-448e-8187-88009517403d (wipe-partitions) was prepared for execution. 2025-08-29 14:46:08.965591 | orchestrator | 2025-08-29 14:46:08 | INFO  | It takes a moment until task 0280401e-810c-448e-8187-88009517403d (wipe-partitions) has been started and output is visible here. 
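The trace above shows a version gate: `semver latest 7.0.0` prints `-1` (the string `latest` is not a parseable version), the `[[ -1 -ge 0 ]]` test fails, and a literal `[[ latest == latest ]]` fallback then selects the `-e custom pull-images` path. A minimal sketch of that pattern, assuming `semver` prints `-1`/`0`/`1` for less/equal/greater and `-1` for unparseable input as the trace suggests (`new_style_pull` is a hypothetical name introduced here):

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the trace: take the new-style pull path
# when MANAGER_VERSION is >= 7.0.0 *or* is the literal string "latest".
MANAGER_VERSION=${MANAGER_VERSION:-latest}

new_style_pull() {
    local cmp
    # semver prints -1 for inputs it cannot parse (e.g. "latest"); if the
    # helper is missing entirely, fall back to -1 as well.
    cmp=$(semver "$MANAGER_VERSION" 7.0.0 2>/dev/null || echo -1)
    if [ "$cmp" -ge 0 ] || [ "$MANAGER_VERSION" = latest ]; then
        # trace then runs: osism apply --no-wait -r 2 -e custom pull-images
        return 0
    fi
    return 1
}
```

The explicit `latest` check is what keeps a rolling tag on the new path even though it compares as "older" than any concrete release.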
2025-08-29 14:46:22.135738 | orchestrator | 2025-08-29 14:46:22.135881 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-08-29 14:46:22.135897 | orchestrator | 2025-08-29 14:46:22.135909 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-08-29 14:46:22.135926 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.121) 0:00:00.121 ********* 2025-08-29 14:46:22.135941 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:46:22.135954 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:46:22.135965 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:46:22.135976 | orchestrator | 2025-08-29 14:46:22.135987 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-08-29 14:46:22.136027 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.556) 0:00:00.678 ********* 2025-08-29 14:46:22.136039 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:46:22.136051 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:22.136067 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:22.136078 | orchestrator | 2025-08-29 14:46:22.136089 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-08-29 14:46:22.136100 | orchestrator | Friday 29 August 2025 14:46:13 +0000 (0:00:00.228) 0:00:00.907 ********* 2025-08-29 14:46:22.136110 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:46:22.136122 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:46:22.136133 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:22.136144 | orchestrator | 2025-08-29 14:46:22.136155 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-08-29 14:46:22.136166 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.677) 0:00:01.584 ********* 2025-08-29 14:46:22.136177 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 14:46:22.136187 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:22.136198 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:22.136208 | orchestrator | 2025-08-29 14:46:22.136219 | orchestrator | TASK [Check device availability] *********************************************** 2025-08-29 14:46:22.136231 | orchestrator | Friday 29 August 2025 14:46:14 +0000 (0:00:00.274) 0:00:01.859 ********* 2025-08-29 14:46:22.136286 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 14:46:22.136304 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 14:46:22.136316 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 14:46:22.136329 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 14:46:22.136341 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 14:46:22.136353 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 14:46:22.136365 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 14:46:22.136377 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 14:46:22.136390 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 14:46:22.136402 | orchestrator | 2025-08-29 14:46:22.136414 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-08-29 14:46:22.136428 | orchestrator | Friday 29 August 2025 14:46:16 +0000 (0:00:01.994) 0:00:03.854 ********* 2025-08-29 14:46:22.136441 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 14:46:22.136454 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 14:46:22.136466 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 14:46:22.136478 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 14:46:22.136491 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 14:46:22.136503 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-08-29 14:46:22.136515 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 14:46:22.136527 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 14:46:22.136539 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 14:46:22.136552 | orchestrator | 2025-08-29 14:46:22.136565 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-08-29 14:46:22.136578 | orchestrator | Friday 29 August 2025 14:46:18 +0000 (0:00:01.348) 0:00:05.202 ********* 2025-08-29 14:46:22.136589 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-08-29 14:46:22.136600 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-08-29 14:46:22.136611 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-08-29 14:46:22.136622 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-08-29 14:46:22.136632 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-08-29 14:46:22.136649 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-08-29 14:46:22.136660 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-08-29 14:46:22.136679 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-08-29 14:46:22.136690 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-08-29 14:46:22.136701 | orchestrator | 2025-08-29 14:46:22.136712 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-08-29 14:46:22.136723 | orchestrator | Friday 29 August 2025 14:46:20 +0000 (0:00:02.310) 0:00:07.513 ********* 2025-08-29 14:46:22.136733 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:46:22.136744 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:46:22.136755 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:46:22.136765 | orchestrator | 2025-08-29 14:46:22.136776 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-08-29 14:46:22.136787 | orchestrator | Friday 29 August 2025 14:46:21 +0000 (0:00:00.607) 0:00:08.121 ********* 2025-08-29 14:46:22.136798 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:46:22.136809 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:46:22.136820 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:46:22.136830 | orchestrator | 2025-08-29 14:46:22.136841 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:46:22.136855 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:22.136867 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:22.136901 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:22.136913 | orchestrator | 2025-08-29 14:46:22.136924 | orchestrator | 2025-08-29 14:46:22.136935 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:46:22.136945 | orchestrator | Friday 29 August 2025 14:46:21 +0000 (0:00:00.624) 0:00:08.745 ********* 2025-08-29 14:46:22.136956 | orchestrator | =============================================================================== 2025-08-29 14:46:22.136967 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.31s 2025-08-29 14:46:22.136978 | orchestrator | Check device availability ----------------------------------------------- 1.99s 2025-08-29 14:46:22.136989 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.35s 2025-08-29 14:46:22.136999 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.68s 2025-08-29 14:46:22.137010 | orchestrator | Request device events from the kernel 
----------------------------------- 0.62s 2025-08-29 14:46:22.137021 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-08-29 14:46:22.137031 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s 2025-08-29 14:46:22.137042 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-08-29 14:46:22.137053 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-08-29 14:46:34.446232 | orchestrator | 2025-08-29 14:46:34 | INFO  | Task 83c72810-4596-41cb-93b9-cadb55d9f7d8 (facts) was prepared for execution. 2025-08-29 14:46:34.446415 | orchestrator | 2025-08-29 14:46:34 | INFO  | It takes a moment until task 83c72810-4596-41cb-93b9-cadb55d9f7d8 (facts) has been started and output is visible here. 2025-08-29 14:46:46.724077 | orchestrator | 2025-08-29 14:46:46.724212 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-08-29 14:46:46.724230 | orchestrator | 2025-08-29 14:46:46.724242 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-08-29 14:46:46.724280 | orchestrator | Friday 29 August 2025 14:46:38 +0000 (0:00:00.279) 0:00:00.279 ********* 2025-08-29 14:46:46.724293 | orchestrator | ok: [testbed-manager] 2025-08-29 14:46:46.724306 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:46:46.724317 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:46:46.724366 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:46:46.724378 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:46:46.724389 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:46:46.724399 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:46.724411 | orchestrator | 2025-08-29 14:46:46.724425 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-08-29 14:46:46.724436 | 
orchestrator | Friday 29 August 2025 14:46:39 +0000 (0:00:01.087) 0:00:01.367 ********* 2025-08-29 14:46:46.724447 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:46:46.724459 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:46:46.724470 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:46:46.724480 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:46:46.724491 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:46:46.724502 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:46.724513 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:46.724523 | orchestrator | 2025-08-29 14:46:46.724534 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-08-29 14:46:46.724545 | orchestrator | 2025-08-29 14:46:46.724556 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-08-29 14:46:46.724567 | orchestrator | Friday 29 August 2025 14:46:41 +0000 (0:00:01.273) 0:00:02.641 ********* 2025-08-29 14:46:46.724578 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:46:46.724589 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:46:46.724600 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:46:46.724613 | orchestrator | ok: [testbed-manager] 2025-08-29 14:46:46.724626 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:46:46.724637 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:46:46.724649 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:46:46.724676 | orchestrator | 2025-08-29 14:46:46.724700 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-08-29 14:46:46.724712 | orchestrator | 2025-08-29 14:46:46.724723 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-08-29 14:46:46.724755 | orchestrator | Friday 29 August 2025 14:46:45 +0000 (0:00:04.582) 0:00:07.223 ********* 2025-08-29 14:46:46.724769 | orchestrator | 
skipping: [testbed-manager] 2025-08-29 14:46:46.724780 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:46:46.724792 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:46:46.724804 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:46:46.724816 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:46:46.724828 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:46:46.724841 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:46:46.724853 | orchestrator | 2025-08-29 14:46:46.724865 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:46:46.724878 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:46.724891 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:46.724904 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:46.724915 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:46.724927 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:46.724940 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:46.724952 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 14:46:46.724964 | orchestrator | 2025-08-29 14:46:46.724985 | orchestrator | 2025-08-29 14:46:46.724996 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:46:46.725007 | orchestrator | Friday 29 August 2025 14:46:46 +0000 (0:00:00.723) 0:00:07.946 ********* 2025-08-29 14:46:46.725018 | orchestrator | =============================================================================== 
2025-08-29 14:46:46.725029 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.58s 2025-08-29 14:46:46.725040 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2025-08-29 14:46:46.725050 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-08-29 14:46:46.725061 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-08-29 14:46:49.137690 | orchestrator | 2025-08-29 14:46:49 | INFO  | Task 9ba2c4fa-e62a-4dfc-9fe7-b307e1b90fa9 (ceph-configure-lvm-volumes) was prepared for execution. 2025-08-29 14:46:49.137848 | orchestrator | 2025-08-29 14:46:49 | INFO  | It takes a moment until task 9ba2c4fa-e62a-4dfc-9fe7-b307e1b90fa9 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-08-29 14:47:00.952213 | orchestrator | 2025-08-29 14:47:00.952423 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 14:47:00.952443 | orchestrator | 2025-08-29 14:47:00.952498 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:47:00.952518 | orchestrator | Friday 29 August 2025 14:46:53 +0000 (0:00:00.343) 0:00:00.343 ********* 2025-08-29 14:47:00.952531 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 14:47:00.952543 | orchestrator | 2025-08-29 14:47:00.952555 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:47:00.952566 | orchestrator | Friday 29 August 2025 14:46:53 +0000 (0:00:00.245) 0:00:00.589 ********* 2025-08-29 14:47:00.952577 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:00.952590 | orchestrator | 2025-08-29 14:47:00.952601 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.952612 | orchestrator | 
Friday 29 August 2025 14:46:54 +0000 (0:00:00.293) 0:00:00.882 ********* 2025-08-29 14:47:00.952623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:47:00.952635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:47:00.952646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:47:00.952657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:47:00.952668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:47:00.952678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:47:00.952689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:47:00.952700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:47:00.952713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 14:47:00.952727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:47:00.952739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:47:00.952762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:47:00.952775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:47:00.952787 | orchestrator | 2025-08-29 14:47:00.952800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.952813 | orchestrator | Friday 29 August 2025 14:46:54 +0000 (0:00:00.373) 0:00:01.256 ********* 2025-08-29 
14:47:00.952826 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.952866 | orchestrator | 2025-08-29 14:47:00.952878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.952890 | orchestrator | Friday 29 August 2025 14:46:54 +0000 (0:00:00.545) 0:00:01.801 ********* 2025-08-29 14:47:00.952902 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.952915 | orchestrator | 2025-08-29 14:47:00.952927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.952940 | orchestrator | Friday 29 August 2025 14:46:55 +0000 (0:00:00.189) 0:00:01.991 ********* 2025-08-29 14:47:00.952952 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.952965 | orchestrator | 2025-08-29 14:47:00.952978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.952989 | orchestrator | Friday 29 August 2025 14:46:55 +0000 (0:00:00.201) 0:00:02.192 ********* 2025-08-29 14:47:00.953001 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953018 | orchestrator | 2025-08-29 14:47:00.953031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.953043 | orchestrator | Friday 29 August 2025 14:46:55 +0000 (0:00:00.181) 0:00:02.373 ********* 2025-08-29 14:47:00.953056 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953068 | orchestrator | 2025-08-29 14:47:00.953080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.953092 | orchestrator | Friday 29 August 2025 14:46:55 +0000 (0:00:00.202) 0:00:02.576 ********* 2025-08-29 14:47:00.953102 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953113 | orchestrator | 2025-08-29 14:47:00.953124 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-08-29 14:47:00.953135 | orchestrator | Friday 29 August 2025 14:46:55 +0000 (0:00:00.188) 0:00:02.764 ********* 2025-08-29 14:47:00.953146 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953157 | orchestrator | 2025-08-29 14:47:00.953167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.953178 | orchestrator | Friday 29 August 2025 14:46:56 +0000 (0:00:00.191) 0:00:02.956 ********* 2025-08-29 14:47:00.953189 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953200 | orchestrator | 2025-08-29 14:47:00.953211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.953221 | orchestrator | Friday 29 August 2025 14:46:56 +0000 (0:00:00.186) 0:00:03.143 ********* 2025-08-29 14:47:00.953232 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a) 2025-08-29 14:47:00.953244 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a) 2025-08-29 14:47:00.953255 | orchestrator | 2025-08-29 14:47:00.953287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.953299 | orchestrator | Friday 29 August 2025 14:46:56 +0000 (0:00:00.368) 0:00:03.511 ********* 2025-08-29 14:47:00.953333 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7) 2025-08-29 14:47:00.953345 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7) 2025-08-29 14:47:00.953356 | orchestrator | 2025-08-29 14:47:00.953367 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.953377 | orchestrator | Friday 29 August 2025 14:46:57 +0000 (0:00:00.383) 0:00:03.895 ********* 2025-08-29 14:47:00.953388 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61) 2025-08-29 14:47:00.953399 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61) 2025-08-29 14:47:00.953409 | orchestrator | 2025-08-29 14:47:00.953420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.953431 | orchestrator | Friday 29 August 2025 14:46:57 +0000 (0:00:00.582) 0:00:04.478 ********* 2025-08-29 14:47:00.953441 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012) 2025-08-29 14:47:00.953459 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012) 2025-08-29 14:47:00.953470 | orchestrator | 2025-08-29 14:47:00.953481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:00.953492 | orchestrator | Friday 29 August 2025 14:46:58 +0000 (0:00:00.550) 0:00:05.028 ********* 2025-08-29 14:47:00.953502 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:47:00.953513 | orchestrator | 2025-08-29 14:47:00.953524 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:00.953540 | orchestrator | Friday 29 August 2025 14:46:58 +0000 (0:00:00.690) 0:00:05.719 ********* 2025-08-29 14:47:00.953552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:47:00.953562 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:47:00.953573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:47:00.953584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-08-29 14:47:00.953594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:47:00.953605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:47:00.953615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:47:00.953626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:47:00.953637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 14:47:00.953647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:47:00.953657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:47:00.953668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:47:00.953679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:47:00.953690 | orchestrator | 2025-08-29 14:47:00.953700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:00.953711 | orchestrator | Friday 29 August 2025 14:46:59 +0000 (0:00:00.381) 0:00:06.100 ********* 2025-08-29 14:47:00.953722 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953732 | orchestrator | 2025-08-29 14:47:00.953743 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:00.953754 | orchestrator | Friday 29 August 2025 14:46:59 +0000 (0:00:00.204) 0:00:06.305 ********* 2025-08-29 14:47:00.953764 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953775 | orchestrator | 2025-08-29 14:47:00.953785 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-08-29 14:47:00.953796 | orchestrator | Friday 29 August 2025 14:46:59 +0000 (0:00:00.218) 0:00:06.524 ********* 2025-08-29 14:47:00.953807 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953817 | orchestrator | 2025-08-29 14:47:00.953828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:00.953839 | orchestrator | Friday 29 August 2025 14:46:59 +0000 (0:00:00.203) 0:00:06.727 ********* 2025-08-29 14:47:00.953849 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953860 | orchestrator | 2025-08-29 14:47:00.953871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:00.953882 | orchestrator | Friday 29 August 2025 14:47:00 +0000 (0:00:00.234) 0:00:06.962 ********* 2025-08-29 14:47:00.953892 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953903 | orchestrator | 2025-08-29 14:47:00.953924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:00.953935 | orchestrator | Friday 29 August 2025 14:47:00 +0000 (0:00:00.217) 0:00:07.180 ********* 2025-08-29 14:47:00.953945 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953956 | orchestrator | 2025-08-29 14:47:00.953966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:00.953977 | orchestrator | Friday 29 August 2025 14:47:00 +0000 (0:00:00.213) 0:00:07.393 ********* 2025-08-29 14:47:00.953988 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:00.953998 | orchestrator | 2025-08-29 14:47:00.954009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:00.954084 | orchestrator | Friday 29 August 2025 14:47:00 +0000 (0:00:00.199) 0:00:07.592 ********* 2025-08-29 14:47:00.954104 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.091633 | orchestrator | 2025-08-29 14:47:08.091744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:08.091762 | orchestrator | Friday 29 August 2025 14:47:00 +0000 (0:00:00.219) 0:00:07.812 ********* 2025-08-29 14:47:08.091774 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 14:47:08.091787 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 14:47:08.091798 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 14:47:08.091809 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 14:47:08.091820 | orchestrator | 2025-08-29 14:47:08.091831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:08.091842 | orchestrator | Friday 29 August 2025 14:47:01 +0000 (0:00:01.002) 0:00:08.814 ********* 2025-08-29 14:47:08.091854 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.091865 | orchestrator | 2025-08-29 14:47:08.091875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:08.091886 | orchestrator | Friday 29 August 2025 14:47:02 +0000 (0:00:00.206) 0:00:09.021 ********* 2025-08-29 14:47:08.091897 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.091908 | orchestrator | 2025-08-29 14:47:08.091919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:08.091930 | orchestrator | Friday 29 August 2025 14:47:02 +0000 (0:00:00.200) 0:00:09.221 ********* 2025-08-29 14:47:08.091941 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.091952 | orchestrator | 2025-08-29 14:47:08.091962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:08.091974 | orchestrator | Friday 29 August 2025 14:47:02 +0000 (0:00:00.193) 0:00:09.414 
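The "Add known links" passes above attach stable `/dev/disk/by-id` names (e.g. `scsi-0QEMU_QEMU_HARDDISK_<uuid>`) to each kernel block device. A minimal sketch of that association, assuming a precomputed mapping from link name to resolved device (the real task reads the symlinks on the node; `links_for_device` and `by_id` are illustrative names, not the role's code):

```python
# Sketch only: associate /dev/disk/by-id link names with their block device.
def links_for_device(by_id, device):
    """Return all by-id link names whose symlink resolves to `device`."""
    return sorted(name for name, target in by_id.items() if target == device)

# Example data modeled on the log output above (QEMU SCSI ids from the log).
by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}

print(links_for_device(by_id, "sdb"))
```

This matches the pattern in the log, where each data disk picks up both a `scsi-0QEMU...` and a `scsi-SQEMU...` alias while the DVD device gets only its `ata-` name.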
********* 2025-08-29 14:47:08.091984 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.091995 | orchestrator | 2025-08-29 14:47:08.092006 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 14:47:08.092017 | orchestrator | Friday 29 August 2025 14:47:02 +0000 (0:00:00.185) 0:00:09.600 ********* 2025-08-29 14:47:08.092028 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-08-29 14:47:08.092039 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-08-29 14:47:08.092050 | orchestrator | 2025-08-29 14:47:08.092061 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 14:47:08.092072 | orchestrator | Friday 29 August 2025 14:47:02 +0000 (0:00:00.157) 0:00:09.757 ********* 2025-08-29 14:47:08.092102 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.092114 | orchestrator | 2025-08-29 14:47:08.092125 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 14:47:08.092136 | orchestrator | Friday 29 August 2025 14:47:03 +0000 (0:00:00.141) 0:00:09.899 ********* 2025-08-29 14:47:08.092147 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.092158 | orchestrator | 2025-08-29 14:47:08.092169 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 14:47:08.092181 | orchestrator | Friday 29 August 2025 14:47:03 +0000 (0:00:00.135) 0:00:10.034 ********* 2025-08-29 14:47:08.092194 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.092228 | orchestrator | 2025-08-29 14:47:08.092242 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 14:47:08.092254 | orchestrator | Friday 29 August 2025 14:47:03 +0000 (0:00:00.150) 0:00:10.184 ********* 2025-08-29 14:47:08.092288 | orchestrator | ok: [testbed-node-3] 
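The `Set UUIDs for OSD VGs/LVs` task above gives each OSD data device (`sdb`, `sdc`) a stable `osd_lvm_uuid`. The log only shows the resulting values; deriving them as a name-based `uuid5` of a per-host namespace plus the device name is purely an assumption for illustration:

```python
import uuid

# Assumed derivation (the log does not show how the role computes the UUID):
# a deterministic uuid5 keyed on hostname and device name yields a value that
# is stable across playbook runs, which is the property the task needs.
HOST_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "testbed-node-3")  # hypothetical namespace

def osd_lvm_uuid(device):
    return str(uuid.uuid5(HOST_NAMESPACE, device))

print({dev: osd_lvm_uuid(dev) for dev in ["sdb", "sdc"]})
```

Whatever the actual derivation, the point is stability: rerunning the play must map the same device to the same VG/LV names.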
2025-08-29 14:47:08.092302 | orchestrator | 2025-08-29 14:47:08.092315 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 14:47:08.092328 | orchestrator | Friday 29 August 2025 14:47:03 +0000 (0:00:00.122) 0:00:10.307 ********* 2025-08-29 14:47:08.092341 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4c2f47a1-6693-5b64-9c97-de0e0041f7f6'}}) 2025-08-29 14:47:08.092355 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '218f7b56-b785-5eaf-b35f-b0ddc87960c6'}}) 2025-08-29 14:47:08.092367 | orchestrator | 2025-08-29 14:47:08.092381 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 14:47:08.092395 | orchestrator | Friday 29 August 2025 14:47:03 +0000 (0:00:00.166) 0:00:10.474 ********* 2025-08-29 14:47:08.092408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4c2f47a1-6693-5b64-9c97-de0e0041f7f6'}})  2025-08-29 14:47:08.092429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '218f7b56-b785-5eaf-b35f-b0ddc87960c6'}})  2025-08-29 14:47:08.092442 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.092455 | orchestrator | 2025-08-29 14:47:08.092467 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 14:47:08.092481 | orchestrator | Friday 29 August 2025 14:47:03 +0000 (0:00:00.145) 0:00:10.620 ********* 2025-08-29 14:47:08.092494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4c2f47a1-6693-5b64-9c97-de0e0041f7f6'}})  2025-08-29 14:47:08.092507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '218f7b56-b785-5eaf-b35f-b0ddc87960c6'}})  2025-08-29 14:47:08.092520 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.092533 | 
orchestrator | 2025-08-29 14:47:08.092544 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 14:47:08.092555 | orchestrator | Friday 29 August 2025 14:47:04 +0000 (0:00:00.284) 0:00:10.904 ********* 2025-08-29 14:47:08.092566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4c2f47a1-6693-5b64-9c97-de0e0041f7f6'}})  2025-08-29 14:47:08.092577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '218f7b56-b785-5eaf-b35f-b0ddc87960c6'}})  2025-08-29 14:47:08.092588 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.092599 | orchestrator | 2025-08-29 14:47:08.092626 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:47:08.092638 | orchestrator | Friday 29 August 2025 14:47:04 +0000 (0:00:00.138) 0:00:11.042 ********* 2025-08-29 14:47:08.092649 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:08.092660 | orchestrator | 2025-08-29 14:47:08.092671 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:47:08.092688 | orchestrator | Friday 29 August 2025 14:47:04 +0000 (0:00:00.151) 0:00:11.193 ********* 2025-08-29 14:47:08.092699 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:47:08.092710 | orchestrator | 2025-08-29 14:47:08.092721 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:47:08.092732 | orchestrator | Friday 29 August 2025 14:47:04 +0000 (0:00:00.130) 0:00:11.324 ********* 2025-08-29 14:47:08.092743 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:47:08.092754 | orchestrator | 2025-08-29 14:47:08.092765 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 14:47:08.092776 | orchestrator | Friday 29 August 2025 14:47:04 +0000 (0:00:00.117) 0:00:11.442 
*********
2025-08-29 14:47:08.092787 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:47:08.092798 | orchestrator |
2025-08-29 14:47:08.092816 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-08-29 14:47:08.092827 | orchestrator | Friday 29 August 2025 14:47:04 +0000 (0:00:00.124) 0:00:11.566 *********
2025-08-29 14:47:08.092838 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:47:08.092849 | orchestrator |
2025-08-29 14:47:08.092860 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-08-29 14:47:08.092871 | orchestrator | Friday 29 August 2025 14:47:04 +0000 (0:00:00.141) 0:00:11.708 *********
2025-08-29 14:47:08.092882 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 14:47:08.092893 | orchestrator |  "ceph_osd_devices": {
2025-08-29 14:47:08.092904 | orchestrator |  "sdb": {
2025-08-29 14:47:08.092916 | orchestrator |  "osd_lvm_uuid": "4c2f47a1-6693-5b64-9c97-de0e0041f7f6"
2025-08-29 14:47:08.092927 | orchestrator |  },
2025-08-29 14:47:08.092939 | orchestrator |  "sdc": {
2025-08-29 14:47:08.092950 | orchestrator |  "osd_lvm_uuid": "218f7b56-b785-5eaf-b35f-b0ddc87960c6"
2025-08-29 14:47:08.092960 | orchestrator |  }
2025-08-29 14:47:08.092971 | orchestrator |  }
2025-08-29 14:47:08.092982 | orchestrator | }
2025-08-29 14:47:08.092993 | orchestrator |
2025-08-29 14:47:08.093004 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 14:47:08.093015 | orchestrator | Friday 29 August 2025 14:47:04 +0000 (0:00:00.138) 0:00:11.847 *********
2025-08-29 14:47:08.093026 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:47:08.093037 | orchestrator |
2025-08-29 14:47:08.093048 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 14:47:08.093059 | orchestrator | Friday 29 August 2025 14:47:05 +0000 (0:00:00.140) 0:00:11.987
2025-08-29 14:47:08.093070 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:47:08.093080 | orchestrator |
2025-08-29 14:47:08.093091 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 14:47:08.093102 | orchestrator | Friday 29 August 2025 14:47:05 +0000 (0:00:00.152) 0:00:12.139 *********
2025-08-29 14:47:08.093113 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:47:08.093124 | orchestrator |
2025-08-29 14:47:08.093135 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 14:47:08.093146 | orchestrator | Friday 29 August 2025 14:47:05 +0000 (0:00:00.133) 0:00:12.273 *********
2025-08-29 14:47:08.093156 | orchestrator | changed: [testbed-node-3] => {
2025-08-29 14:47:08.093167 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-08-29 14:47:08.093178 | orchestrator |  "ceph_osd_devices": {
2025-08-29 14:47:08.093189 | orchestrator |  "sdb": {
2025-08-29 14:47:08.093200 | orchestrator |  "osd_lvm_uuid": "4c2f47a1-6693-5b64-9c97-de0e0041f7f6"
2025-08-29 14:47:08.093211 | orchestrator |  },
2025-08-29 14:47:08.093222 | orchestrator |  "sdc": {
2025-08-29 14:47:08.093233 | orchestrator |  "osd_lvm_uuid": "218f7b56-b785-5eaf-b35f-b0ddc87960c6"
2025-08-29 14:47:08.093244 | orchestrator |  }
2025-08-29 14:47:08.093255 | orchestrator |  },
2025-08-29 14:47:08.093280 | orchestrator |  "lvm_volumes": [
2025-08-29 14:47:08.093292 | orchestrator |  {
2025-08-29 14:47:08.093303 | orchestrator |  "data": "osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6",
2025-08-29 14:47:08.093314 | orchestrator |  "data_vg": "ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6"
2025-08-29 14:47:08.093325 | orchestrator |  },
2025-08-29 14:47:08.093336 | orchestrator |  {
2025-08-29 14:47:08.093347 | orchestrator |  "data": "osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6",
2025-08-29 14:47:08.093358 | orchestrator |  "data_vg": "ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6"
2025-08-29 14:47:08.093368 | orchestrator |  }
2025-08-29 14:47:08.093379 | orchestrator |  ]
2025-08-29 14:47:08.093390 | orchestrator |  }
2025-08-29 14:47:08.093401 | orchestrator | }
2025-08-29 14:47:08.093412 | orchestrator |
2025-08-29 14:47:08.093423 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 14:47:08.093447 | orchestrator | Friday 29 August 2025 14:47:05 +0000 (0:00:00.284) 0:00:12.557 *********
2025-08-29 14:47:08.093458 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-08-29 14:47:08.093469 | orchestrator |
2025-08-29 14:47:08.093480 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-08-29 14:47:08.093491 | orchestrator |
2025-08-29 14:47:08.093502 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:47:08.093513 | orchestrator | Friday 29 August 2025 14:47:07 +0000 (0:00:01.908) 0:00:14.466 *********
2025-08-29 14:47:08.093524 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 14:47:08.093534 | orchestrator |
2025-08-29 14:47:08.093545 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:47:08.093556 | orchestrator | Friday 29 August 2025 14:47:07 +0000 (0:00:00.265) 0:00:14.731 *********
2025-08-29 14:47:08.093567 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:47:08.093578 | orchestrator |
2025-08-29 14:47:08.093589 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:47:08.093607 | orchestrator | Friday 29 August 2025 14:47:08 +0000 (0:00:00.222) 0:00:14.953 *********
2025-08-29 14:47:15.362401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-08-29 14:47:15.362501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for
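The configuration data printed for testbed-node-3 shows a direct mapping: each entry of `ceph_osd_devices` that carries an `osd_lvm_uuid` becomes one `lvm_volumes` item whose LV is `osd-block-<uuid>` and whose VG is `ceph-<uuid>`. A sketch of that "block only" transformation, inferred from the printed output rather than taken from the role's source:

```python
# Inferred from the log's "Print configuration data" output: build the
# ceph-ansible style lvm_volumes list from the per-device OSD UUIDs.
def lvm_volumes_from_osd_devices(ceph_osd_devices):
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()  # insertion order is preserved
    ]

# The exact values printed in the log above for testbed-node-3.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "4c2f47a1-6693-5b64-9c97-de0e0041f7f6"},
    "sdc": {"osd_lvm_uuid": "218f7b56-b785-5eaf-b35f-b0ddc87960c6"},
}
print(lvm_volumes_from_osd_devices(ceph_osd_devices))
```

The "block + db" and "block + wal" variants were skipped in this run, so only data LVs appear; with a DB or WAL device configured the items would presumably gain the corresponding extra keys.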
testbed-node-4 => (item=loop1) 2025-08-29 14:47:15.362515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:47:15.362526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:47:15.362536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:47:15.362547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:47:15.362558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:47:15.362568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:47:15.362579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-08-29 14:47:15.362589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:47:15.362600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:47:15.362610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:47:15.362621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:47:15.362636 | orchestrator | 2025-08-29 14:47:15.362648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.362660 | orchestrator | Friday 29 August 2025 14:47:08 +0000 (0:00:00.344) 0:00:15.298 ********* 2025-08-29 14:47:15.362671 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.362683 | orchestrator | 2025-08-29 14:47:15.362693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.362704 | orchestrator | Friday 29 August 2025 
14:47:08 +0000 (0:00:00.192) 0:00:15.490 ********* 2025-08-29 14:47:15.362715 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.362725 | orchestrator | 2025-08-29 14:47:15.362736 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.362746 | orchestrator | Friday 29 August 2025 14:47:08 +0000 (0:00:00.188) 0:00:15.679 ********* 2025-08-29 14:47:15.362757 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.362767 | orchestrator | 2025-08-29 14:47:15.362778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.362789 | orchestrator | Friday 29 August 2025 14:47:08 +0000 (0:00:00.165) 0:00:15.844 ********* 2025-08-29 14:47:15.362799 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.362837 | orchestrator | 2025-08-29 14:47:15.362848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.362859 | orchestrator | Friday 29 August 2025 14:47:09 +0000 (0:00:00.191) 0:00:16.036 ********* 2025-08-29 14:47:15.362869 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.362880 | orchestrator | 2025-08-29 14:47:15.362893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.362906 | orchestrator | Friday 29 August 2025 14:47:09 +0000 (0:00:00.473) 0:00:16.509 ********* 2025-08-29 14:47:15.362918 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.362930 | orchestrator | 2025-08-29 14:47:15.362942 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.362954 | orchestrator | Friday 29 August 2025 14:47:09 +0000 (0:00:00.182) 0:00:16.691 ********* 2025-08-29 14:47:15.362966 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.362978 | orchestrator | 2025-08-29 14:47:15.363006 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.363019 | orchestrator | Friday 29 August 2025 14:47:10 +0000 (0:00:00.194) 0:00:16.886 ********* 2025-08-29 14:47:15.363030 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363043 | orchestrator | 2025-08-29 14:47:15.363054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.363067 | orchestrator | Friday 29 August 2025 14:47:10 +0000 (0:00:00.189) 0:00:17.076 ********* 2025-08-29 14:47:15.363079 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a) 2025-08-29 14:47:15.363093 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a) 2025-08-29 14:47:15.363105 | orchestrator | 2025-08-29 14:47:15.363117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.363130 | orchestrator | Friday 29 August 2025 14:47:10 +0000 (0:00:00.364) 0:00:17.440 ********* 2025-08-29 14:47:15.363142 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24) 2025-08-29 14:47:15.363155 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24) 2025-08-29 14:47:15.363167 | orchestrator | 2025-08-29 14:47:15.363179 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.363191 | orchestrator | Friday 29 August 2025 14:47:10 +0000 (0:00:00.395) 0:00:17.835 ********* 2025-08-29 14:47:15.363203 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9) 2025-08-29 14:47:15.363215 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9) 2025-08-29 14:47:15.363228 | orchestrator | 2025-08-29 
14:47:15.363240 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.363252 | orchestrator | Friday 29 August 2025 14:47:11 +0000 (0:00:00.522) 0:00:18.357 ********* 2025-08-29 14:47:15.363299 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f) 2025-08-29 14:47:15.363312 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f) 2025-08-29 14:47:15.363323 | orchestrator | 2025-08-29 14:47:15.363334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:15.363345 | orchestrator | Friday 29 August 2025 14:47:12 +0000 (0:00:00.526) 0:00:18.884 ********* 2025-08-29 14:47:15.363356 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:47:15.363367 | orchestrator | 2025-08-29 14:47:15.363378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363388 | orchestrator | Friday 29 August 2025 14:47:12 +0000 (0:00:00.302) 0:00:19.186 ********* 2025-08-29 14:47:15.363399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-08-29 14:47:15.363417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-08-29 14:47:15.363428 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-08-29 14:47:15.363439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-08-29 14:47:15.363449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-08-29 14:47:15.363460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-08-29 14:47:15.363471 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-08-29 14:47:15.363481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-08-29 14:47:15.363492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-08-29 14:47:15.363503 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-08-29 14:47:15.363513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-08-29 14:47:15.363524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-08-29 14:47:15.363534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-08-29 14:47:15.363545 | orchestrator | 2025-08-29 14:47:15.363556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363566 | orchestrator | Friday 29 August 2025 14:47:12 +0000 (0:00:00.393) 0:00:19.580 ********* 2025-08-29 14:47:15.363577 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363588 | orchestrator | 2025-08-29 14:47:15.363599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363609 | orchestrator | Friday 29 August 2025 14:47:12 +0000 (0:00:00.176) 0:00:19.756 ********* 2025-08-29 14:47:15.363620 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363630 | orchestrator | 2025-08-29 14:47:15.363641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363652 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.506) 0:00:20.263 ********* 2025-08-29 14:47:15.363668 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363679 | orchestrator | 
2025-08-29 14:47:15.363689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363700 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.173) 0:00:20.437 ********* 2025-08-29 14:47:15.363711 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363721 | orchestrator | 2025-08-29 14:47:15.363732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363743 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.217) 0:00:20.654 ********* 2025-08-29 14:47:15.363754 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363765 | orchestrator | 2025-08-29 14:47:15.363775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363786 | orchestrator | Friday 29 August 2025 14:47:13 +0000 (0:00:00.170) 0:00:20.825 ********* 2025-08-29 14:47:15.363797 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363807 | orchestrator | 2025-08-29 14:47:15.363818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363829 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.173) 0:00:20.998 ********* 2025-08-29 14:47:15.363840 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363850 | orchestrator | 2025-08-29 14:47:15.363861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363872 | orchestrator | Friday 29 August 2025 14:47:14 +0000 (0:00:00.171) 0:00:21.170 ********* 2025-08-29 14:47:15.363882 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.363893 | orchestrator | 2025-08-29 14:47:15.363904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363922 | orchestrator | Friday 29 August 2025 14:47:14 +0000 
(0:00:00.188) 0:00:21.358 ********* 2025-08-29 14:47:15.363933 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 14:47:15.363944 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 14:47:15.363955 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 14:47:15.363966 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 14:47:15.363977 | orchestrator | 2025-08-29 14:47:15.363988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:15.363998 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.660) 0:00:22.018 ********* 2025-08-29 14:47:15.364009 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:15.364020 | orchestrator | 2025-08-29 14:47:15.364037 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:21.369739 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.207) 0:00:22.225 ********* 2025-08-29 14:47:21.369827 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.369842 | orchestrator | 2025-08-29 14:47:21.369854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:21.369865 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.181) 0:00:22.407 ********* 2025-08-29 14:47:21.369875 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.369886 | orchestrator | 2025-08-29 14:47:21.369896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:21.369907 | orchestrator | Friday 29 August 2025 14:47:15 +0000 (0:00:00.176) 0:00:22.584 ********* 2025-08-29 14:47:21.369918 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.369928 | orchestrator | 2025-08-29 14:47:21.369939 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 14:47:21.369949 | orchestrator | 
Friday 29 August 2025 14:47:15 +0000 (0:00:00.184) 0:00:22.768 ********* 2025-08-29 14:47:21.369960 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-08-29 14:47:21.369970 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-08-29 14:47:21.369981 | orchestrator | 2025-08-29 14:47:21.369992 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 14:47:21.370002 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.302) 0:00:23.070 ********* 2025-08-29 14:47:21.370012 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.370084 | orchestrator | 2025-08-29 14:47:21.370104 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 14:47:21.370123 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.137) 0:00:23.208 ********* 2025-08-29 14:47:21.370138 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.370150 | orchestrator | 2025-08-29 14:47:21.370160 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 14:47:21.370171 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.105) 0:00:23.313 ********* 2025-08-29 14:47:21.370187 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.370206 | orchestrator | 2025-08-29 14:47:21.370221 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 14:47:21.370232 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.089) 0:00:23.403 ********* 2025-08-29 14:47:21.370243 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:47:21.370254 | orchestrator | 2025-08-29 14:47:21.370264 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 14:47:21.370334 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.096) 0:00:23.499 ********* 
2025-08-29 14:47:21.370350 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd5b7d9a-1dd4-5184-a319-6c247fab2039'}}) 2025-08-29 14:47:21.370364 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '95dc25c6-61fb-51c1-a723-34c7e57ec220'}}) 2025-08-29 14:47:21.370377 | orchestrator | 2025-08-29 14:47:21.370389 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 14:47:21.370428 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.121) 0:00:23.621 ********* 2025-08-29 14:47:21.370453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd5b7d9a-1dd4-5184-a319-6c247fab2039'}})  2025-08-29 14:47:21.370474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '95dc25c6-61fb-51c1-a723-34c7e57ec220'}})  2025-08-29 14:47:21.370491 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.370506 | orchestrator | 2025-08-29 14:47:21.370520 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 14:47:21.370534 | orchestrator | Friday 29 August 2025 14:47:16 +0000 (0:00:00.115) 0:00:23.737 ********* 2025-08-29 14:47:21.370562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd5b7d9a-1dd4-5184-a319-6c247fab2039'}})  2025-08-29 14:47:21.370577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '95dc25c6-61fb-51c1-a723-34c7e57ec220'}})  2025-08-29 14:47:21.370590 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.370602 | orchestrator | 2025-08-29 14:47:21.370615 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 14:47:21.370628 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.226) 0:00:23.963 ********* 2025-08-29 14:47:21.370642 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd5b7d9a-1dd4-5184-a319-6c247fab2039'}})  2025-08-29 14:47:21.370656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '95dc25c6-61fb-51c1-a723-34c7e57ec220'}})  2025-08-29 14:47:21.370670 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.370701 | orchestrator | 2025-08-29 14:47:21.370728 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:47:21.370739 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.143) 0:00:24.106 ********* 2025-08-29 14:47:21.370751 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:47:21.370763 | orchestrator | 2025-08-29 14:47:21.370774 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:47:21.370785 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.131) 0:00:24.238 ********* 2025-08-29 14:47:21.370806 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:47:21.370817 | orchestrator | 2025-08-29 14:47:21.370828 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:47:21.370839 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.132) 0:00:24.371 ********* 2025-08-29 14:47:21.370852 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.370862 | orchestrator | 2025-08-29 14:47:21.370903 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 14:47:21.370917 | orchestrator | Friday 29 August 2025 14:47:17 +0000 (0:00:00.116) 0:00:24.488 ********* 2025-08-29 14:47:21.370928 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.370939 | orchestrator | 2025-08-29 14:47:21.370965 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 14:47:21.370976 | orchestrator | 
Friday 29 August 2025 14:47:17 +0000 (0:00:00.254) 0:00:24.742 ********* 2025-08-29 14:47:21.371000 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.371012 | orchestrator | 2025-08-29 14:47:21.371023 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-08-29 14:47:21.371034 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.131) 0:00:24.873 ********* 2025-08-29 14:47:21.371045 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:47:21.371056 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:47:21.371068 | orchestrator |  "sdb": { 2025-08-29 14:47:21.371079 | orchestrator |  "osd_lvm_uuid": "cd5b7d9a-1dd4-5184-a319-6c247fab2039" 2025-08-29 14:47:21.371090 | orchestrator |  }, 2025-08-29 14:47:21.371101 | orchestrator |  "sdc": { 2025-08-29 14:47:21.371127 | orchestrator |  "osd_lvm_uuid": "95dc25c6-61fb-51c1-a723-34c7e57ec220" 2025-08-29 14:47:21.371139 | orchestrator |  } 2025-08-29 14:47:21.371150 | orchestrator |  } 2025-08-29 14:47:21.371162 | orchestrator | } 2025-08-29 14:47:21.371173 | orchestrator | 2025-08-29 14:47:21.371184 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-08-29 14:47:21.371194 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.137) 0:00:25.010 ********* 2025-08-29 14:47:21.371206 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.371216 | orchestrator | 2025-08-29 14:47:21.371228 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-08-29 14:47:21.371249 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.130) 0:00:25.141 ********* 2025-08-29 14:47:21.371261 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.371306 | orchestrator | 2025-08-29 14:47:21.371323 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-08-29 14:47:21.371334 | orchestrator | Friday 29 
August 2025 14:47:18 +0000 (0:00:00.136) 0:00:25.278 ********* 2025-08-29 14:47:21.371345 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:47:21.371356 | orchestrator | 2025-08-29 14:47:21.371366 | orchestrator | TASK [Print configuration data] ************************************************ 2025-08-29 14:47:21.371389 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.142) 0:00:25.421 ********* 2025-08-29 14:47:21.371409 | orchestrator | changed: [testbed-node-4] => { 2025-08-29 14:47:21.371443 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-08-29 14:47:21.371455 | orchestrator |  "ceph_osd_devices": { 2025-08-29 14:47:21.371466 | orchestrator |  "sdb": { 2025-08-29 14:47:21.371478 | orchestrator |  "osd_lvm_uuid": "cd5b7d9a-1dd4-5184-a319-6c247fab2039" 2025-08-29 14:47:21.371489 | orchestrator |  }, 2025-08-29 14:47:21.371500 | orchestrator |  "sdc": { 2025-08-29 14:47:21.371511 | orchestrator |  "osd_lvm_uuid": "95dc25c6-61fb-51c1-a723-34c7e57ec220" 2025-08-29 14:47:21.371525 | orchestrator |  } 2025-08-29 14:47:21.371544 | orchestrator |  }, 2025-08-29 14:47:21.371564 | orchestrator |  "lvm_volumes": [ 2025-08-29 14:47:21.371575 | orchestrator |  { 2025-08-29 14:47:21.371591 | orchestrator |  "data": "osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039", 2025-08-29 14:47:21.371611 | orchestrator |  "data_vg": "ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039" 2025-08-29 14:47:21.371626 | orchestrator |  }, 2025-08-29 14:47:21.371637 | orchestrator |  { 2025-08-29 14:47:21.371648 | orchestrator |  "data": "osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220", 2025-08-29 14:47:21.371659 | orchestrator |  "data_vg": "ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220" 2025-08-29 14:47:21.371670 | orchestrator |  } 2025-08-29 14:47:21.371681 | orchestrator |  ] 2025-08-29 14:47:21.371692 | orchestrator |  } 2025-08-29 14:47:21.371703 | orchestrator | } 2025-08-29 14:47:21.371714 | orchestrator | 2025-08-29 14:47:21.371725 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2025-08-29 14:47:21.371736 | orchestrator | Friday 29 August 2025 14:47:18 +0000 (0:00:00.184) 0:00:25.605 ********* 2025-08-29 14:47:21.371747 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-08-29 14:47:21.371757 | orchestrator | 2025-08-29 14:47:21.371768 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-08-29 14:47:21.371779 | orchestrator | 2025-08-29 14:47:21.371789 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:47:21.371800 | orchestrator | Friday 29 August 2025 14:47:19 +0000 (0:00:00.941) 0:00:26.546 ********* 2025-08-29 14:47:21.371810 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-08-29 14:47:21.371821 | orchestrator | 2025-08-29 14:47:21.371832 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:47:21.371848 | orchestrator | Friday 29 August 2025 14:47:20 +0000 (0:00:00.419) 0:00:26.965 ********* 2025-08-29 14:47:21.371880 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:47:21.371892 | orchestrator | 2025-08-29 14:47:21.371903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:21.371914 | orchestrator | Friday 29 August 2025 14:47:20 +0000 (0:00:00.804) 0:00:27.770 ********* 2025-08-29 14:47:21.371933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:47:21.371944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:47:21.371963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-08-29 14:47:21.371982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-08-29 
14:47:21.371994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:47:21.372004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:47:21.372024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:47:30.188019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:47:30.188120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 14:47:30.188132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:47:30.188142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:47:30.188152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:47:30.188161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:47:30.188171 | orchestrator | 2025-08-29 14:47:30.188181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188192 | orchestrator | Friday 29 August 2025 14:47:21 +0000 (0:00:00.454) 0:00:28.224 ********* 2025-08-29 14:47:30.188202 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.188213 | orchestrator | 2025-08-29 14:47:30.188222 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188232 | orchestrator | Friday 29 August 2025 14:47:21 +0000 (0:00:00.272) 0:00:28.496 ********* 2025-08-29 14:47:30.188241 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.188250 | orchestrator | 2025-08-29 14:47:30.188260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-08-29 14:47:30.188269 | orchestrator | Friday 29 August 2025 14:47:21 +0000 (0:00:00.225) 0:00:28.722 ********* 2025-08-29 14:47:30.188337 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.188348 | orchestrator | 2025-08-29 14:47:30.188358 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188367 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.215) 0:00:28.937 ********* 2025-08-29 14:47:30.188377 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.188386 | orchestrator | 2025-08-29 14:47:30.188396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188405 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.222) 0:00:29.160 ********* 2025-08-29 14:47:30.188414 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.188424 | orchestrator | 2025-08-29 14:47:30.188433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188442 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.250) 0:00:29.410 ********* 2025-08-29 14:47:30.188452 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.188461 | orchestrator | 2025-08-29 14:47:30.188470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188480 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.218) 0:00:29.629 ********* 2025-08-29 14:47:30.188490 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.188524 | orchestrator | 2025-08-29 14:47:30.188534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188544 | orchestrator | Friday 29 August 2025 14:47:22 +0000 (0:00:00.213) 0:00:29.842 ********* 2025-08-29 14:47:30.188554 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.188565 
| orchestrator | 2025-08-29 14:47:30.188576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188586 | orchestrator | Friday 29 August 2025 14:47:23 +0000 (0:00:00.205) 0:00:30.048 ********* 2025-08-29 14:47:30.188598 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8) 2025-08-29 14:47:30.188610 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8) 2025-08-29 14:47:30.188621 | orchestrator | 2025-08-29 14:47:30.188632 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188643 | orchestrator | Friday 29 August 2025 14:47:23 +0000 (0:00:00.716) 0:00:30.764 ********* 2025-08-29 14:47:30.188654 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b) 2025-08-29 14:47:30.188665 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b) 2025-08-29 14:47:30.188676 | orchestrator | 2025-08-29 14:47:30.188687 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188698 | orchestrator | Friday 29 August 2025 14:47:24 +0000 (0:00:00.909) 0:00:31.674 ********* 2025-08-29 14:47:30.188709 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a) 2025-08-29 14:47:30.188720 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a) 2025-08-29 14:47:30.188731 | orchestrator | 2025-08-29 14:47:30.188742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188753 | orchestrator | Friday 29 August 2025 14:47:25 +0000 (0:00:00.506) 0:00:32.181 ********* 2025-08-29 14:47:30.188763 | orchestrator | ok: 
[testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8) 2025-08-29 14:47:30.188775 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8) 2025-08-29 14:47:30.188785 | orchestrator | 2025-08-29 14:47:30.188796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:47:30.188808 | orchestrator | Friday 29 August 2025 14:47:25 +0000 (0:00:00.427) 0:00:32.608 ********* 2025-08-29 14:47:30.188819 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:47:30.188830 | orchestrator | 2025-08-29 14:47:30.188840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.188851 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:00.326) 0:00:32.935 ********* 2025-08-29 14:47:30.188878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:47:30.188889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:47:30.188900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 14:47:30.188910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 14:47:30.188921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:47:30.188930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:47:30.188939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:47:30.188949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:47:30.188959 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 14:47:30.188993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:47:30.189003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:47:30.189013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:47:30.189023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:47:30.189032 | orchestrator | 2025-08-29 14:47:30.189042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189051 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:00.379) 0:00:33.315 ********* 2025-08-29 14:47:30.189061 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189070 | orchestrator | 2025-08-29 14:47:30.189080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189090 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:00.202) 0:00:33.518 ********* 2025-08-29 14:47:30.189099 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189108 | orchestrator | 2025-08-29 14:47:30.189118 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189127 | orchestrator | Friday 29 August 2025 14:47:26 +0000 (0:00:00.189) 0:00:33.707 ********* 2025-08-29 14:47:30.189137 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189146 | orchestrator | 2025-08-29 14:47:30.189160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189169 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.228) 0:00:33.935 ********* 2025-08-29 14:47:30.189179 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 14:47:30.189188 | orchestrator | 2025-08-29 14:47:30.189198 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189207 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.278) 0:00:34.214 ********* 2025-08-29 14:47:30.189217 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189227 | orchestrator | 2025-08-29 14:47:30.189236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189245 | orchestrator | Friday 29 August 2025 14:47:27 +0000 (0:00:00.204) 0:00:34.418 ********* 2025-08-29 14:47:30.189255 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189264 | orchestrator | 2025-08-29 14:47:30.189274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189304 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.716) 0:00:35.134 ********* 2025-08-29 14:47:30.189314 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189324 | orchestrator | 2025-08-29 14:47:30.189333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189343 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.220) 0:00:35.355 ********* 2025-08-29 14:47:30.189352 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189362 | orchestrator | 2025-08-29 14:47:30.189372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189381 | orchestrator | Friday 29 August 2025 14:47:28 +0000 (0:00:00.193) 0:00:35.548 ********* 2025-08-29 14:47:30.189391 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 14:47:30.189401 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 14:47:30.189411 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 
14:47:30.189420 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 14:47:30.189430 | orchestrator | 2025-08-29 14:47:30.189439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189449 | orchestrator | Friday 29 August 2025 14:47:29 +0000 (0:00:00.671) 0:00:36.220 ********* 2025-08-29 14:47:30.189459 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189468 | orchestrator | 2025-08-29 14:47:30.189478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189494 | orchestrator | Friday 29 August 2025 14:47:29 +0000 (0:00:00.213) 0:00:36.434 ********* 2025-08-29 14:47:30.189503 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189513 | orchestrator | 2025-08-29 14:47:30.189523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189532 | orchestrator | Friday 29 August 2025 14:47:29 +0000 (0:00:00.194) 0:00:36.628 ********* 2025-08-29 14:47:30.189542 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189551 | orchestrator | 2025-08-29 14:47:30.189561 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:47:30.189570 | orchestrator | Friday 29 August 2025 14:47:29 +0000 (0:00:00.209) 0:00:36.838 ********* 2025-08-29 14:47:30.189580 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:30.189589 | orchestrator | 2025-08-29 14:47:30.189599 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-08-29 14:47:30.189614 | orchestrator | Friday 29 August 2025 14:47:30 +0000 (0:00:00.210) 0:00:37.049 ********* 2025-08-29 14:47:35.399967 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-08-29 14:47:35.400019 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 
2025-08-29 14:47:35.400024 | orchestrator | 2025-08-29 14:47:35.400029 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-08-29 14:47:35.400033 | orchestrator | Friday 29 August 2025 14:47:30 +0000 (0:00:00.266) 0:00:37.315 ********* 2025-08-29 14:47:35.400037 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400041 | orchestrator | 2025-08-29 14:47:35.400045 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-08-29 14:47:35.400049 | orchestrator | Friday 29 August 2025 14:47:30 +0000 (0:00:00.156) 0:00:37.472 ********* 2025-08-29 14:47:35.400053 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400057 | orchestrator | 2025-08-29 14:47:35.400061 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-08-29 14:47:35.400064 | orchestrator | Friday 29 August 2025 14:47:30 +0000 (0:00:00.154) 0:00:37.627 ********* 2025-08-29 14:47:35.400068 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400072 | orchestrator | 2025-08-29 14:47:35.400076 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-08-29 14:47:35.400079 | orchestrator | Friday 29 August 2025 14:47:30 +0000 (0:00:00.207) 0:00:37.835 ********* 2025-08-29 14:47:35.400083 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:47:35.400087 | orchestrator | 2025-08-29 14:47:35.400091 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-08-29 14:47:35.400095 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:00.587) 0:00:38.423 ********* 2025-08-29 14:47:35.400100 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea955146-254c-5a5a-83ec-c4f4ca16d6a1'}}) 2025-08-29 14:47:35.400104 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'aeb09036-0b6a-534a-a94a-678fcf7bc5df'}}) 2025-08-29 14:47:35.400107 | orchestrator | 2025-08-29 14:47:35.400111 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-08-29 14:47:35.400115 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:00.255) 0:00:38.679 ********* 2025-08-29 14:47:35.400119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea955146-254c-5a5a-83ec-c4f4ca16d6a1'}})  2025-08-29 14:47:35.400124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aeb09036-0b6a-534a-a94a-678fcf7bc5df'}})  2025-08-29 14:47:35.400128 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400132 | orchestrator | 2025-08-29 14:47:35.400136 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-08-29 14:47:35.400140 | orchestrator | Friday 29 August 2025 14:47:31 +0000 (0:00:00.175) 0:00:38.854 ********* 2025-08-29 14:47:35.400143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea955146-254c-5a5a-83ec-c4f4ca16d6a1'}})  2025-08-29 14:47:35.400159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aeb09036-0b6a-534a-a94a-678fcf7bc5df'}})  2025-08-29 14:47:35.400163 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400167 | orchestrator | 2025-08-29 14:47:35.400171 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-08-29 14:47:35.400175 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.162) 0:00:39.016 ********* 2025-08-29 14:47:35.400179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea955146-254c-5a5a-83ec-c4f4ca16d6a1'}})  2025-08-29 14:47:35.400183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'aeb09036-0b6a-534a-a94a-678fcf7bc5df'}})  2025-08-29 14:47:35.400186 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400190 | orchestrator | 2025-08-29 14:47:35.400194 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-08-29 14:47:35.400198 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.161) 0:00:39.177 ********* 2025-08-29 14:47:35.400201 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:47:35.400205 | orchestrator | 2025-08-29 14:47:35.400222 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-08-29 14:47:35.400226 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.180) 0:00:39.358 ********* 2025-08-29 14:47:35.400229 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:47:35.400233 | orchestrator | 2025-08-29 14:47:35.400237 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-08-29 14:47:35.400241 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.165) 0:00:39.524 ********* 2025-08-29 14:47:35.400244 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400248 | orchestrator | 2025-08-29 14:47:35.400252 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-08-29 14:47:35.400256 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.139) 0:00:39.663 ********* 2025-08-29 14:47:35.400259 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400263 | orchestrator | 2025-08-29 14:47:35.400267 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-08-29 14:47:35.400271 | orchestrator | Friday 29 August 2025 14:47:32 +0000 (0:00:00.142) 0:00:39.806 ********* 2025-08-29 14:47:35.400274 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:47:35.400278 | orchestrator | 2025-08-29 14:47:35.400295 | orchestrator | TASK [Print 
ceph_osd_devices] **************************************************
2025-08-29 14:47:35.400299 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.163) 0:00:40.019 *********
2025-08-29 14:47:35.400303 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 14:47:35.400307 | orchestrator |     "ceph_osd_devices": {
2025-08-29 14:47:35.400310 | orchestrator |         "sdb": {
2025-08-29 14:47:35.400315 | orchestrator |             "osd_lvm_uuid": "ea955146-254c-5a5a-83ec-c4f4ca16d6a1"
2025-08-29 14:47:35.400326 | orchestrator |         },
2025-08-29 14:47:35.400330 | orchestrator |         "sdc": {
2025-08-29 14:47:35.400334 | orchestrator |             "osd_lvm_uuid": "aeb09036-0b6a-534a-a94a-678fcf7bc5df"
2025-08-29 14:47:35.400337 | orchestrator |         }
2025-08-29 14:47:35.400341 | orchestrator |     }
2025-08-29 14:47:35.400345 | orchestrator | }
2025-08-29 14:47:35.400349 | orchestrator |
2025-08-29 14:47:35.400353 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-08-29 14:47:35.400357 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.163) 0:00:40.183 *********
2025-08-29 14:47:35.400360 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:47:35.400364 | orchestrator |
2025-08-29 14:47:35.400368 | orchestrator | TASK [Print DB devices] ********************************************************
2025-08-29 14:47:35.400372 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.131) 0:00:40.315 *********
2025-08-29 14:47:35.400375 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:47:35.400379 | orchestrator |
2025-08-29 14:47:35.400383 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-08-29 14:47:35.400390 | orchestrator | Friday 29 August 2025 14:47:33 +0000 (0:00:00.399) 0:00:40.715 *********
2025-08-29 14:47:35.400394 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:47:35.400397 | orchestrator |
2025-08-29 14:47:35.400401 | orchestrator | TASK [Print configuration data] ************************************************
2025-08-29 14:47:35.400405 | orchestrator | Friday 29 August 2025 14:47:34 +0000 (0:00:00.156) 0:00:40.871 *********
2025-08-29 14:47:35.400409 | orchestrator | changed: [testbed-node-5] => {
2025-08-29 14:47:35.400412 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-08-29 14:47:35.400416 | orchestrator |         "ceph_osd_devices": {
2025-08-29 14:47:35.400420 | orchestrator |             "sdb": {
2025-08-29 14:47:35.400424 | orchestrator |                 "osd_lvm_uuid": "ea955146-254c-5a5a-83ec-c4f4ca16d6a1"
2025-08-29 14:47:35.400428 | orchestrator |             },
2025-08-29 14:47:35.400432 | orchestrator |             "sdc": {
2025-08-29 14:47:35.400435 | orchestrator |                 "osd_lvm_uuid": "aeb09036-0b6a-534a-a94a-678fcf7bc5df"
2025-08-29 14:47:35.400439 | orchestrator |             }
2025-08-29 14:47:35.400443 | orchestrator |         },
2025-08-29 14:47:35.400447 | orchestrator |         "lvm_volumes": [
2025-08-29 14:47:35.400451 | orchestrator |             {
2025-08-29 14:47:35.400454 | orchestrator |                 "data": "osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1",
2025-08-29 14:47:35.400458 | orchestrator |                 "data_vg": "ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1"
2025-08-29 14:47:35.400462 | orchestrator |             },
2025-08-29 14:47:35.400466 | orchestrator |             {
2025-08-29 14:47:35.400470 | orchestrator |                 "data": "osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df",
2025-08-29 14:47:35.400473 | orchestrator |                 "data_vg": "ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df"
2025-08-29 14:47:35.400477 | orchestrator |             }
2025-08-29 14:47:35.400481 | orchestrator |         ]
2025-08-29 14:47:35.400485 | orchestrator |     }
2025-08-29 14:47:35.400490 | orchestrator | }
2025-08-29 14:47:35.400494 | orchestrator |
2025-08-29 14:47:35.400498 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-08-29 14:47:35.400501 | orchestrator | Friday 29 August 2025 14:47:34 +0000 (0:00:00.248) 0:00:41.120 *********
2025-08-29 14:47:35.400505 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 14:47:35.400509 | orchestrator |
2025-08-29 14:47:35.400513 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:47:35.400516 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 14:47:35.400521 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 14:47:35.400525 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-08-29 14:47:35.400528 | orchestrator |
2025-08-29 14:47:35.400532 | orchestrator |
2025-08-29 14:47:35.400536 | orchestrator |
2025-08-29 14:47:35.400540 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:47:35.400543 | orchestrator | Friday 29 August 2025 14:47:35 +0000 (0:00:01.111) 0:00:42.232 *********
2025-08-29 14:47:35.400547 | orchestrator | ===============================================================================
2025-08-29 14:47:35.400551 | orchestrator | Write configuration file ------------------------------------------------ 3.96s
2025-08-29 14:47:35.400555 | orchestrator | Get initial list of available block devices ----------------------------- 1.32s
2025-08-29 14:47:35.400559 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s
2025-08-29 14:47:35.400562 | orchestrator | Add known partitions to the list of available block devices ------------- 1.15s
2025-08-29 14:47:35.400566 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2025-08-29 14:47:35.400572 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.93s
2025-08-29 14:47:35.400576 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s
2025-08-29 14:47:35.400580 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.81s
2025-08-29 14:47:35.400584 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.73s
2025-08-29 14:47:35.400587 | orchestrator | Print configuration data ------------------------------------------------ 0.72s
2025-08-29 14:47:35.400591 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-08-29 14:47:35.400595 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-08-29 14:47:35.400598 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-08-29 14:47:35.400602 | orchestrator | Print DB devices -------------------------------------------------------- 0.69s
2025-08-29 14:47:35.400608 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.67s
2025-08-29 14:47:35.754900 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-08-29 14:47:35.754978 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-08-29 14:47:35.754993 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2025-08-29 14:47:35.755005 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2025-08-29 14:47:35.755018 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2025-08-29 14:47:58.771735 | orchestrator | 2025-08-29 14:47:58 | INFO  | Task 48ac058d-7c98-4eec-a107-f301b0ed5d43 (sync inventory) is running in background. Output coming soon.
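The "Print configuration data" output above makes the mapping explicit: each entry in `ceph_osd_devices` with an `osd_lvm_uuid` is expanded into one `lvm_volumes` item whose LV is named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that derivation (the function name is illustrative, not taken from the playbook):

```python
def build_lvm_volumes(ceph_osd_devices):
    # Mirror the naming scheme visible in the task output above:
    # LV "osd-block-<osd_lvm_uuid>" inside VG "ceph-<osd_lvm_uuid>".
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# The sdb/sdc devices reported for testbed-node-5 above:
devices = {
    "sdb": {"osd_lvm_uuid": "ea955146-254c-5a5a-83ec-c4f4ca16d6a1"},
    "sdc": {"osd_lvm_uuid": "aeb09036-0b6a-534a-a94a-678fcf7bc5df"},
}
for vol in build_lvm_volumes(devices):
    print(vol["data"], vol["data_vg"])
```

This reproduces exactly the two `lvm_volumes` entries printed by the play for testbed-node-5.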
2025-08-29 14:48:26.543975 | orchestrator | 2025-08-29 14:48:00 | INFO  | Starting group_vars file reorganization
2025-08-29 14:48:26.544090 | orchestrator | 2025-08-29 14:48:00 | INFO  | Moved 0 file(s) to their respective directories
2025-08-29 14:48:26.544106 | orchestrator | 2025-08-29 14:48:00 | INFO  | Group_vars file reorganization completed
2025-08-29 14:48:26.544118 | orchestrator | 2025-08-29 14:48:03 | INFO  | Starting variable preparation from inventory
2025-08-29 14:48:26.544130 | orchestrator | 2025-08-29 14:48:06 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-08-29 14:48:26.544293 | orchestrator | 2025-08-29 14:48:06 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-08-29 14:48:26.544426 | orchestrator | 2025-08-29 14:48:06 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-08-29 14:48:26.544442 | orchestrator | 2025-08-29 14:48:06 | INFO  | 3 file(s) written, 6 host(s) processed
2025-08-29 14:48:26.544453 | orchestrator | 2025-08-29 14:48:06 | INFO  | Variable preparation completed
2025-08-29 14:48:26.544464 | orchestrator | 2025-08-29 14:48:08 | INFO  | Starting inventory overwrite handling
2025-08-29 14:48:26.544475 | orchestrator | 2025-08-29 14:48:08 | INFO  | Handling group overwrites in 99-overwrite
2025-08-29 14:48:26.544510 | orchestrator | 2025-08-29 14:48:08 | INFO  | Removing group frr:children from 60-generic
2025-08-29 14:48:26.544522 | orchestrator | 2025-08-29 14:48:08 | INFO  | Removing group storage:children from 50-kolla
2025-08-29 14:48:26.544536 | orchestrator | 2025-08-29 14:48:08 | INFO  | Removing group netbird:children from 50-infrastruture
2025-08-29 14:48:26.544547 | orchestrator | 2025-08-29 14:48:08 | INFO  | Removing group ceph-rgw from 50-ceph
2025-08-29 14:48:26.544560 | orchestrator | 2025-08-29 14:48:08 | INFO  | Removing group ceph-mds from 50-ceph
2025-08-29 14:48:26.544572 | orchestrator | 2025-08-29 14:48:08 | INFO  | Handling group overwrites in 20-roles
2025-08-29 14:48:26.544584 | orchestrator | 2025-08-29 14:48:08 | INFO  | Removing group k3s_node from 50-infrastruture
2025-08-29 14:48:26.544619 | orchestrator | 2025-08-29 14:48:08 | INFO  | Removed 6 group(s) in total
2025-08-29 14:48:26.544632 | orchestrator | 2025-08-29 14:48:08 | INFO  | Inventory overwrite handling completed
2025-08-29 14:48:26.544644 | orchestrator | 2025-08-29 14:48:09 | INFO  | Starting merge of inventory files
2025-08-29 14:48:26.544656 | orchestrator | 2025-08-29 14:48:09 | INFO  | Inventory files merged successfully
2025-08-29 14:48:26.544668 | orchestrator | 2025-08-29 14:48:13 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-08-29 14:48:26.544680 | orchestrator | 2025-08-29 14:48:25 | INFO  | Successfully wrote ClusterShell configuration
2025-08-29 14:48:26.544692 | orchestrator | [master 085be09] 2025-08-29-14-48
2025-08-29 14:48:26.544705 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-08-29 14:48:28.850484 | orchestrator | 2025-08-29 14:48:28 | INFO  | Task 9dd9b36b-247d-45db-aba4-fb208c2314d9 (ceph-create-lvm-devices) was prepared for execution.
2025-08-29 14:48:28.851433 | orchestrator | 2025-08-29 14:48:28 | INFO  | It takes a moment until task 9dd9b36b-247d-45db-aba4-fb208c2314d9 (ceph-create-lvm-devices) has been started and output is visible here.
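The overwrite handling logged above removes group definitions (e.g. `frr:children` from `60-generic`) from lower-priority inventory files when a higher-priority file such as `99-overwrite` redefines them. A rough sketch of such a removal on an INI-style inventory file; this is only an illustration of the idea under the assumption of plain `[section]` blocks, not the actual OSISM implementation:

```python
import re

def remove_group(inventory_text, group):
    # Drop the "[group]" section header plus its body, up to the next
    # "[...]" header. Sketch only: real tooling may also handle
    # comments, ordering, and `vars` sections differently.
    pattern = re.compile(
        r"^\[" + re.escape(group) + r"\]\n(?:(?!\[).*\n?)*",
        re.MULTILINE,
    )
    return pattern.sub("", inventory_text)

text = "[frr:children]\ngeneric\n\n[storage]\ntestbed-node-3\n"
print(remove_group(text, "frr:children"))
# Only the [storage] section remains.
```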
2025-08-29 14:48:41.280747 | orchestrator | 2025-08-29 14:48:41.280850 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-08-29 14:48:41.280866 | orchestrator | 2025-08-29 14:48:41.280878 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-08-29 14:48:41.280889 | orchestrator | Friday 29 August 2025 14:48:33 +0000 (0:00:00.314) 0:00:00.314 ********* 2025-08-29 14:48:41.280900 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-08-29 14:48:41.280912 | orchestrator | 2025-08-29 14:48:41.280923 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-08-29 14:48:41.280933 | orchestrator | Friday 29 August 2025 14:48:33 +0000 (0:00:00.252) 0:00:00.566 ********* 2025-08-29 14:48:41.280945 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:48:41.280956 | orchestrator | 2025-08-29 14:48:41.280967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.280978 | orchestrator | Friday 29 August 2025 14:48:33 +0000 (0:00:00.243) 0:00:00.810 ********* 2025-08-29 14:48:41.280988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:48:41.281001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:48:41.281012 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:48:41.281022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:48:41.281033 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:48:41.281044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:48:41.281055 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:48:41.281065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:48:41.281076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-08-29 14:48:41.281087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:48:41.281097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:48:41.281108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:48:41.281119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:48:41.281130 | orchestrator | 2025-08-29 14:48:41.281141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281175 | orchestrator | Friday 29 August 2025 14:48:34 +0000 (0:00:00.436) 0:00:01.246 ********* 2025-08-29 14:48:41.281187 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.281198 | orchestrator | 2025-08-29 14:48:41.281209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281220 | orchestrator | Friday 29 August 2025 14:48:34 +0000 (0:00:00.477) 0:00:01.724 ********* 2025-08-29 14:48:41.281231 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.281242 | orchestrator | 2025-08-29 14:48:41.281252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281263 | orchestrator | Friday 29 August 2025 14:48:34 +0000 (0:00:00.217) 0:00:01.942 ********* 2025-08-29 14:48:41.281274 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.281284 | orchestrator | 2025-08-29 14:48:41.281296 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-08-29 14:48:41.281341 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:00.234) 0:00:02.176 ********* 2025-08-29 14:48:41.281353 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.281365 | orchestrator | 2025-08-29 14:48:41.281377 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281389 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:00.213) 0:00:02.390 ********* 2025-08-29 14:48:41.281401 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.281413 | orchestrator | 2025-08-29 14:48:41.281425 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281438 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:00.205) 0:00:02.595 ********* 2025-08-29 14:48:41.281450 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.281461 | orchestrator | 2025-08-29 14:48:41.281474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281486 | orchestrator | Friday 29 August 2025 14:48:35 +0000 (0:00:00.235) 0:00:02.831 ********* 2025-08-29 14:48:41.281498 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.281510 | orchestrator | 2025-08-29 14:48:41.281522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281534 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:00.201) 0:00:03.032 ********* 2025-08-29 14:48:41.281547 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.281558 | orchestrator | 2025-08-29 14:48:41.281571 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281583 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:00.214) 0:00:03.247 ********* 2025-08-29 14:48:41.281595 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a) 2025-08-29 14:48:41.281609 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a) 2025-08-29 14:48:41.281620 | orchestrator | 2025-08-29 14:48:41.281631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281641 | orchestrator | Friday 29 August 2025 14:48:36 +0000 (0:00:00.430) 0:00:03.678 ********* 2025-08-29 14:48:41.281668 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7) 2025-08-29 14:48:41.281680 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7) 2025-08-29 14:48:41.281691 | orchestrator | 2025-08-29 14:48:41.281702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281712 | orchestrator | Friday 29 August 2025 14:48:37 +0000 (0:00:00.524) 0:00:04.202 ********* 2025-08-29 14:48:41.281723 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61) 2025-08-29 14:48:41.281734 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61) 2025-08-29 14:48:41.281745 | orchestrator | 2025-08-29 14:48:41.281755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281774 | orchestrator | Friday 29 August 2025 14:48:37 +0000 (0:00:00.676) 0:00:04.879 ********* 2025-08-29 14:48:41.281785 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012) 2025-08-29 14:48:41.281796 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012) 2025-08-29 14:48:41.281807 | orchestrator | 2025-08-29 14:48:41.281817 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:48:41.281828 | orchestrator | Friday 29 August 2025 14:48:38 +0000 (0:00:00.906) 0:00:05.786 ********* 2025-08-29 14:48:41.281839 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:48:41.281850 | orchestrator | 2025-08-29 14:48:41.281861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.281871 | orchestrator | Friday 29 August 2025 14:48:39 +0000 (0:00:00.340) 0:00:06.126 ********* 2025-08-29 14:48:41.281882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-08-29 14:48:41.281893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-08-29 14:48:41.281903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-08-29 14:48:41.281914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-08-29 14:48:41.281925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-08-29 14:48:41.281935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-08-29 14:48:41.281946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-08-29 14:48:41.281957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-08-29 14:48:41.281982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-08-29 14:48:41.281993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-08-29 14:48:41.282004 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-08-29 14:48:41.282015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-08-29 14:48:41.282098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-08-29 14:48:41.282111 | orchestrator | 2025-08-29 14:48:41.282122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.282133 | orchestrator | Friday 29 August 2025 14:48:39 +0000 (0:00:00.433) 0:00:06.559 ********* 2025-08-29 14:48:41.282144 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.282155 | orchestrator | 2025-08-29 14:48:41.282166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.282177 | orchestrator | Friday 29 August 2025 14:48:39 +0000 (0:00:00.207) 0:00:06.767 ********* 2025-08-29 14:48:41.282188 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.282198 | orchestrator | 2025-08-29 14:48:41.282209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.282220 | orchestrator | Friday 29 August 2025 14:48:40 +0000 (0:00:00.251) 0:00:07.019 ********* 2025-08-29 14:48:41.282231 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.282242 | orchestrator | 2025-08-29 14:48:41.282253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.282264 | orchestrator | Friday 29 August 2025 14:48:40 +0000 (0:00:00.229) 0:00:07.249 ********* 2025-08-29 14:48:41.282275 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.282285 | orchestrator | 2025-08-29 14:48:41.282296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.282334 | orchestrator | Friday 29 August 2025 
14:48:40 +0000 (0:00:00.211) 0:00:07.460 ********* 2025-08-29 14:48:41.282346 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.282357 | orchestrator | 2025-08-29 14:48:41.282368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.282379 | orchestrator | Friday 29 August 2025 14:48:40 +0000 (0:00:00.211) 0:00:07.671 ********* 2025-08-29 14:48:41.282390 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.282400 | orchestrator | 2025-08-29 14:48:41.282411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.282422 | orchestrator | Friday 29 August 2025 14:48:40 +0000 (0:00:00.203) 0:00:07.874 ********* 2025-08-29 14:48:41.282433 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:41.282444 | orchestrator | 2025-08-29 14:48:41.282455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:41.282466 | orchestrator | Friday 29 August 2025 14:48:41 +0000 (0:00:00.207) 0:00:08.082 ********* 2025-08-29 14:48:41.282486 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:49.322977 | orchestrator | 2025-08-29 14:48:49.323081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:49.323096 | orchestrator | Friday 29 August 2025 14:48:41 +0000 (0:00:00.186) 0:00:08.269 ********* 2025-08-29 14:48:49.323108 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-08-29 14:48:49.323120 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-08-29 14:48:49.323143 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-08-29 14:48:49.323155 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-08-29 14:48:49.323166 | orchestrator | 2025-08-29 14:48:49.323177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:49.323188 | 
orchestrator | Friday 29 August 2025 14:48:42 +0000 (0:00:01.225) 0:00:09.494 ********* 2025-08-29 14:48:49.323199 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:49.323210 | orchestrator | 2025-08-29 14:48:49.323220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:49.323231 | orchestrator | Friday 29 August 2025 14:48:42 +0000 (0:00:00.210) 0:00:09.705 ********* 2025-08-29 14:48:49.323242 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:49.323252 | orchestrator | 2025-08-29 14:48:49.323263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:49.323277 | orchestrator | Friday 29 August 2025 14:48:42 +0000 (0:00:00.230) 0:00:09.935 ********* 2025-08-29 14:48:49.323297 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:49.323355 | orchestrator | 2025-08-29 14:48:49.323366 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:48:49.323377 | orchestrator | Friday 29 August 2025 14:48:43 +0000 (0:00:00.197) 0:00:10.133 ********* 2025-08-29 14:48:49.323388 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:49.323399 | orchestrator | 2025-08-29 14:48:49.323410 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:48:49.323421 | orchestrator | Friday 29 August 2025 14:48:43 +0000 (0:00:00.205) 0:00:10.339 ********* 2025-08-29 14:48:49.323431 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:49.323445 | orchestrator | 2025-08-29 14:48:49.323465 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 14:48:49.323478 | orchestrator | Friday 29 August 2025 14:48:43 +0000 (0:00:00.150) 0:00:10.489 ********* 2025-08-29 14:48:49.323550 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'4c2f47a1-6693-5b64-9c97-de0e0041f7f6'}}) 2025-08-29 14:48:49.323563 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '218f7b56-b785-5eaf-b35f-b0ddc87960c6'}}) 2025-08-29 14:48:49.323576 | orchestrator | 2025-08-29 14:48:49.323588 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:48:49.323600 | orchestrator | Friday 29 August 2025 14:48:43 +0000 (0:00:00.194) 0:00:10.683 ********* 2025-08-29 14:48:49.323615 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'}) 2025-08-29 14:48:49.323653 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'}) 2025-08-29 14:48:49.323666 | orchestrator | 2025-08-29 14:48:49.323678 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:48:49.323706 | orchestrator | Friday 29 August 2025 14:48:45 +0000 (0:00:02.063) 0:00:12.747 ********* 2025-08-29 14:48:49.323719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})  2025-08-29 14:48:49.323733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})  2025-08-29 14:48:49.323745 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:48:49.323757 | orchestrator | 2025-08-29 14:48:49.323769 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 14:48:49.323782 | orchestrator | Friday 29 August 2025 14:48:45 +0000 (0:00:00.161) 0:00:12.908 ********* 2025-08-29 14:48:49.323794 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:49.323834 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:49.323849 | orchestrator |
2025-08-29 14:48:49.323861 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-08-29 14:48:49.323872 | orchestrator | Friday 29 August 2025 14:48:47 +0000 (0:00:01.465) 0:00:14.374 *********
2025-08-29 14:48:49.323883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:49.323895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:49.323906 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.323916 | orchestrator |
2025-08-29 14:48:49.323927 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-08-29 14:48:49.323937 | orchestrator | Friday 29 August 2025 14:48:47 +0000 (0:00:00.144) 0:00:14.518 *********
2025-08-29 14:48:49.323948 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.323959 | orchestrator |
2025-08-29 14:48:49.323969 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-08-29 14:48:49.323998 | orchestrator | Friday 29 August 2025 14:48:47 +0000 (0:00:00.129) 0:00:14.647 *********
2025-08-29 14:48:49.324009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:49.324020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:49.324031 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324042 | orchestrator |
2025-08-29 14:48:49.324053 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-08-29 14:48:49.324063 | orchestrator | Friday 29 August 2025 14:48:47 +0000 (0:00:00.266) 0:00:14.914 *********
2025-08-29 14:48:49.324074 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324084 | orchestrator |
2025-08-29 14:48:49.324095 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-08-29 14:48:49.324106 | orchestrator | Friday 29 August 2025 14:48:48 +0000 (0:00:00.135) 0:00:15.050 *********
2025-08-29 14:48:49.324117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:49.324136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:49.324147 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324157 | orchestrator |
2025-08-29 14:48:49.324168 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-08-29 14:48:49.324179 | orchestrator | Friday 29 August 2025 14:48:48 +0000 (0:00:00.144) 0:00:15.195 *********
2025-08-29 14:48:49.324190 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324200 | orchestrator |
2025-08-29 14:48:49.324211 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-08-29 14:48:49.324222 | orchestrator | Friday 29 August 2025 14:48:48 +0000 (0:00:00.128) 0:00:15.324 *********
2025-08-29 14:48:49.324232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:49.324243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:49.324254 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324264 | orchestrator |
2025-08-29 14:48:49.324275 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-08-29 14:48:49.324286 | orchestrator | Friday 29 August 2025 14:48:48 +0000 (0:00:00.124) 0:00:15.448 *********
2025-08-29 14:48:49.324297 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:48:49.324337 | orchestrator |
2025-08-29 14:48:49.324348 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-08-29 14:48:49.324359 | orchestrator | Friday 29 August 2025 14:48:48 +0000 (0:00:00.132) 0:00:15.581 *********
2025-08-29 14:48:49.324370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:49.324381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:49.324391 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324402 | orchestrator |
2025-08-29 14:48:49.324413 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-08-29 14:48:49.324432 | orchestrator | Friday 29 August 2025 14:48:48 +0000 (0:00:00.143) 0:00:15.724 *********
2025-08-29 14:48:49.324443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
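The "Count OSDs put on ceph_*_devices defined in lvm_volumes" tasks above group `lvm_volumes` entries by the volume group that hosts their DB/WAL LV. The real tasks are Ansible/Jinja2 filter expressions, but the counting logic can be sketched in Python; this is a hypothetical helper, and the optional `db_vg`/`wal_vg` keys are assumed from the ceph-ansible `lvm_volumes` schema (the testbed items above carry only `data`/`data_vg`, which is why every item is skipped and the resulting counts are empty):

```python
from collections import Counter

def count_osds_per_vg(lvm_volumes, vg_key):
    """Count how many OSDs place an LV on each VG named by vg_key
    ('db_vg' or 'wal_vg'). Entries lacking that key are skipped,
    mirroring the per-item 'skipping' results in the log."""
    return Counter(vol[vg_key] for vol in lvm_volumes if vg_key in vol)

# The testbed's lvm_volumes items only define data/data_vg:
lvm_volumes = [
    {"data": "osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6",
     "data_vg": "ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6"},
    {"data": "osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6",
     "data_vg": "ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6"},
]
print(dict(count_osds_per_vg(lvm_volumes, "db_vg")))  # {}
```

An empty count here is consistent with the `"_num_osds_wanted_per_db_vg": {}` debug output that the playbook prints a few tasks later.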
2025-08-29 14:48:49.324454 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:49.324464 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324475 | orchestrator |
2025-08-29 14:48:49.324486 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-08-29 14:48:49.324497 | orchestrator | Friday 29 August 2025 14:48:48 +0000 (0:00:00.179) 0:00:15.904 *********
2025-08-29 14:48:49.324507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:49.324518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:49.324529 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324540 | orchestrator |
2025-08-29 14:48:49.324550 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-08-29 14:48:49.324561 | orchestrator | Friday 29 August 2025 14:48:49 +0000 (0:00:00.137) 0:00:16.042 *********
2025-08-29 14:48:49.324571 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324590 | orchestrator |
2025-08-29 14:48:49.324601 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-08-29 14:48:49.324612 | orchestrator | Friday 29 August 2025 14:48:49 +0000 (0:00:00.137) 0:00:16.180 *********
2025-08-29 14:48:49.324623 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:49.324634 | orchestrator |
2025-08-29 14:48:49.324650 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-08-29 14:48:55.839257 | orchestrator | Friday 29 August 2025 14:48:49 +0000 (0:00:00.130) 0:00:16.310 *********
2025-08-29 14:48:55.839442 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.839486 | orchestrator |
2025-08-29 14:48:55.839501 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-08-29 14:48:55.839513 | orchestrator | Friday 29 August 2025 14:48:49 +0000 (0:00:00.142) 0:00:16.453 *********
2025-08-29 14:48:55.839524 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 14:48:55.839535 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-08-29 14:48:55.839547 | orchestrator | }
2025-08-29 14:48:55.839558 | orchestrator |
2025-08-29 14:48:55.839569 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-08-29 14:48:55.839580 | orchestrator | Friday 29 August 2025 14:48:49 +0000 (0:00:00.301) 0:00:16.754 *********
2025-08-29 14:48:55.839591 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 14:48:55.839602 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-08-29 14:48:55.839613 | orchestrator | }
2025-08-29 14:48:55.839624 | orchestrator |
2025-08-29 14:48:55.839635 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-08-29 14:48:55.839645 | orchestrator | Friday 29 August 2025 14:48:49 +0000 (0:00:00.121) 0:00:16.876 *********
2025-08-29 14:48:55.839656 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 14:48:55.839667 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-08-29 14:48:55.839678 | orchestrator | }
2025-08-29 14:48:55.839690 | orchestrator |
2025-08-29 14:48:55.839700 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-08-29 14:48:55.839711 | orchestrator | Friday 29 August 2025 14:48:50 +0000 (0:00:00.131) 0:00:17.007 *********
2025-08-29 14:48:55.839722 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:48:55.839733 | orchestrator |
2025-08-29 14:48:55.839743 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-08-29 14:48:55.839754 | orchestrator | Friday 29 August 2025 14:48:50 +0000 (0:00:00.665) 0:00:17.673 *********
2025-08-29 14:48:55.839765 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:48:55.839776 | orchestrator |
2025-08-29 14:48:55.839786 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-08-29 14:48:55.839797 | orchestrator | Friday 29 August 2025 14:48:51 +0000 (0:00:00.609) 0:00:18.282 *********
2025-08-29 14:48:55.839808 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:48:55.839819 | orchestrator |
2025-08-29 14:48:55.839829 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-08-29 14:48:55.839840 | orchestrator | Friday 29 August 2025 14:48:51 +0000 (0:00:00.509) 0:00:18.792 *********
2025-08-29 14:48:55.839851 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:48:55.839861 | orchestrator |
2025-08-29 14:48:55.839872 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-08-29 14:48:55.839883 | orchestrator | Friday 29 August 2025 14:48:51 +0000 (0:00:00.188) 0:00:18.980 *********
2025-08-29 14:48:55.839893 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.839941 | orchestrator |
2025-08-29 14:48:55.839952 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-08-29 14:48:55.839963 | orchestrator | Friday 29 August 2025 14:48:52 +0000 (0:00:00.151) 0:00:19.131 *********
2025-08-29 14:48:55.839973 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.839984 | orchestrator |
2025-08-29 14:48:55.839995 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-08-29 14:48:55.840005 | orchestrator | Friday 29 August 2025 14:48:52 +0000 (0:00:00.124) 0:00:19.255 *********
2025-08-29 14:48:55.840163 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 14:48:55.840177 | orchestrator |     "vgs_report": {
2025-08-29 14:48:55.840205 | orchestrator |         "vg": []
2025-08-29 14:48:55.840216 | orchestrator |     }
2025-08-29 14:48:55.840241 | orchestrator | }
2025-08-29 14:48:55.840253 | orchestrator |
2025-08-29 14:48:55.840264 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-08-29 14:48:55.840275 | orchestrator | Friday 29 August 2025 14:48:52 +0000 (0:00:00.159) 0:00:19.415 *********
2025-08-29 14:48:55.840285 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840296 | orchestrator |
2025-08-29 14:48:55.840332 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-08-29 14:48:55.840344 | orchestrator | Friday 29 August 2025 14:48:52 +0000 (0:00:00.141) 0:00:19.557 *********
2025-08-29 14:48:55.840355 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840365 | orchestrator |
2025-08-29 14:48:55.840376 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-08-29 14:48:55.840386 | orchestrator | Friday 29 August 2025 14:48:52 +0000 (0:00:00.153) 0:00:19.710 *********
2025-08-29 14:48:55.840397 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840425 | orchestrator |
2025-08-29 14:48:55.840436 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-08-29 14:48:55.840447 | orchestrator | Friday 29 August 2025 14:48:53 +0000 (0:00:00.415) 0:00:20.126 *********
2025-08-29 14:48:55.840457 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840468 | orchestrator |
2025-08-29 14:48:55.840478 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-08-29 14:48:55.840489 | orchestrator | Friday 29 August 2025 14:48:53 +0000 (0:00:00.138) 0:00:20.265 *********
2025-08-29 14:48:55.840499 | orchestrator | skipping:
[testbed-node-3]
2025-08-29 14:48:55.840510 | orchestrator |
2025-08-29 14:48:55.840521 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-08-29 14:48:55.840532 | orchestrator | Friday 29 August 2025 14:48:53 +0000 (0:00:00.155) 0:00:20.420 *********
2025-08-29 14:48:55.840543 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840553 | orchestrator |
2025-08-29 14:48:55.840564 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-08-29 14:48:55.840575 | orchestrator | Friday 29 August 2025 14:48:53 +0000 (0:00:00.153) 0:00:20.574 *********
2025-08-29 14:48:55.840585 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840595 | orchestrator |
2025-08-29 14:48:55.840606 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-08-29 14:48:55.840617 | orchestrator | Friday 29 August 2025 14:48:53 +0000 (0:00:00.157) 0:00:20.732 *********
2025-08-29 14:48:55.840627 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840638 | orchestrator |
2025-08-29 14:48:55.840649 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-08-29 14:48:55.840679 | orchestrator | Friday 29 August 2025 14:48:53 +0000 (0:00:00.168) 0:00:20.900 *********
2025-08-29 14:48:55.840691 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840701 | orchestrator |
2025-08-29 14:48:55.840712 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-08-29 14:48:55.840723 | orchestrator | Friday 29 August 2025 14:48:54 +0000 (0:00:00.133) 0:00:21.033 *********
2025-08-29 14:48:55.840733 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840744 | orchestrator |
2025-08-29 14:48:55.840754 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-08-29 14:48:55.840765 | orchestrator | Friday 29 August 2025 14:48:54 +0000 (0:00:00.120) 0:00:21.154 *********
2025-08-29 14:48:55.840776 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840786 | orchestrator |
2025-08-29 14:48:55.840797 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-08-29 14:48:55.840807 | orchestrator | Friday 29 August 2025 14:48:54 +0000 (0:00:00.141) 0:00:21.295 *********
2025-08-29 14:48:55.840818 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840835 | orchestrator |
2025-08-29 14:48:55.840883 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-08-29 14:48:55.840896 | orchestrator | Friday 29 August 2025 14:48:54 +0000 (0:00:00.141) 0:00:21.437 *********
2025-08-29 14:48:55.840906 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840917 | orchestrator |
2025-08-29 14:48:55.840928 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-08-29 14:48:55.840939 | orchestrator | Friday 29 August 2025 14:48:54 +0000 (0:00:00.139) 0:00:21.577 *********
2025-08-29 14:48:55.840949 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.840960 | orchestrator |
2025-08-29 14:48:55.840970 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-08-29 14:48:55.840981 | orchestrator | Friday 29 August 2025 14:48:54 +0000 (0:00:00.136) 0:00:21.714 *********
2025-08-29 14:48:55.840992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:55.841005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:55.841015 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.841026 | orchestrator |
2025-08-29 14:48:55.841036 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-08-29 14:48:55.841047 | orchestrator | Friday 29 August 2025 14:48:55 +0000 (0:00:00.299) 0:00:22.013 *********
2025-08-29 14:48:55.841058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:55.841068 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:55.841079 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.841089 | orchestrator |
2025-08-29 14:48:55.841100 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-08-29 14:48:55.841110 | orchestrator | Friday 29 August 2025 14:48:55 +0000 (0:00:00.191) 0:00:22.205 *********
2025-08-29 14:48:55.841122 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:55.841133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:55.841143 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.841154 | orchestrator |
2025-08-29 14:48:55.841165 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-08-29 14:48:55.841175 | orchestrator | Friday 29 August 2025 14:48:55 +0000 (0:00:00.168) 0:00:22.374 *********
2025-08-29 14:48:55.841186 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:55.841197 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:55.841207 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.841218 | orchestrator |
2025-08-29 14:48:55.841237 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-08-29 14:48:55.841249 | orchestrator | Friday 29 August 2025 14:48:55 +0000 (0:00:00.169) 0:00:22.521 *********
2025-08-29 14:48:55.841259 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:55.841270 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:48:55.841280 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:48:55.841299 | orchestrator |
2025-08-29 14:48:55.841400 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-08-29 14:48:55.841510 | orchestrator | Friday 29 August 2025 14:48:55 +0000 (0:00:00.169) 0:00:22.690 *********
2025-08-29 14:48:55.841560 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:48:55.841584 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:49:01.523694 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:49:01.523851 | orchestrator |
2025-08-29 14:49:01.523882 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-08-29 14:49:01.523905 | orchestrator | Friday 29 August 2025
14:48:55 +0000 (0:00:00.135) 0:00:22.825 *********
2025-08-29 14:49:01.523951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:49:01.523974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:49:01.523994 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:49:01.524014 | orchestrator |
2025-08-29 14:49:01.524036 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-08-29 14:49:01.524055 | orchestrator | Friday 29 August 2025 14:48:55 +0000 (0:00:00.134) 0:00:22.960 *********
2025-08-29 14:49:01.524075 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:49:01.524094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:49:01.524106 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:49:01.524117 | orchestrator |
2025-08-29 14:49:01.524128 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-08-29 14:49:01.524139 | orchestrator | Friday 29 August 2025 14:48:56 +0000 (0:00:00.139) 0:00:23.099 *********
2025-08-29 14:49:01.524150 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:49:01.524163 | orchestrator |
2025-08-29 14:49:01.524182 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-08-29 14:49:01.524202 | orchestrator | Friday 29 August 2025 14:48:56 +0000 (0:00:00.500) 0:00:23.600 *********
2025-08-29 14:49:01.524222 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:49:01.524241 | orchestrator |
2025-08-29 14:49:01.524261 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-08-29 14:49:01.524272 | orchestrator | Friday 29 August 2025 14:48:57 +0000 (0:00:00.536) 0:00:24.136 *********
2025-08-29 14:49:01.524283 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:49:01.524294 | orchestrator |
2025-08-29 14:49:01.524332 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-08-29 14:49:01.524347 | orchestrator | Friday 29 August 2025 14:48:57 +0000 (0:00:00.146) 0:00:24.283 *********
2025-08-29 14:49:01.524358 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'vg_name': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:49:01.524371 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'vg_name': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:49:01.524381 | orchestrator |
2025-08-29 14:49:01.524399 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-08-29 14:49:01.524410 | orchestrator | Friday 29 August 2025 14:48:57 +0000 (0:00:00.192) 0:00:24.475 *********
2025-08-29 14:49:01.524421 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:49:01.524458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:49:01.524470 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:49:01.524481 | orchestrator |
2025-08-29 14:49:01.524491 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-08-29 14:49:01.524502 | orchestrator | Friday 29 August 2025 14:48:57 +0000 (0:00:00.413) 0:00:24.888 *********
2025-08-29 14:49:01.524513 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:49:01.524524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:49:01.524535 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:49:01.524546 | orchestrator |
2025-08-29 14:49:01.524557 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-08-29 14:49:01.524567 | orchestrator | Friday 29 August 2025 14:48:58 +0000 (0:00:00.172) 0:00:25.061 *********
2025-08-29 14:49:01.524579 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'})
2025-08-29 14:49:01.524590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'})
2025-08-29 14:49:01.524601 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:49:01.524612 | orchestrator |
2025-08-29 14:49:01.524622 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 14:49:01.524633 | orchestrator | Friday 29 August 2025 14:48:58 +0000 (0:00:00.185) 0:00:25.246 *********
2025-08-29 14:49:01.524644 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 14:49:01.524655 | orchestrator |     "lvm_report": {
2025-08-29 14:49:01.524667 | orchestrator |         "lv": [
2025-08-29 14:49:01.524678 | orchestrator |             {
2025-08-29 14:49:01.524709 | orchestrator |                 "lv_name": "osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6",
2025-08-29 14:49:01.524722 | orchestrator |                 "vg_name": "ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6"
2025-08-29 14:49:01.524732 | orchestrator |             },
2025-08-29 14:49:01.524743 | orchestrator |             {
2025-08-29 14:49:01.524754 | orchestrator |                 "lv_name": "osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6",
2025-08-29 14:49:01.524765 | orchestrator |                 "vg_name": "ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6"
2025-08-29 14:49:01.524776 | orchestrator |             }
2025-08-29 14:49:01.524787 | orchestrator |         ],
2025-08-29 14:49:01.524797 | orchestrator |         "pv": [
2025-08-29 14:49:01.524808 | orchestrator |             {
2025-08-29 14:49:01.524819 | orchestrator |                 "pv_name": "/dev/sdb",
2025-08-29 14:49:01.524830 | orchestrator |                 "vg_name": "ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6"
2025-08-29 14:49:01.524840 | orchestrator |             },
2025-08-29 14:49:01.524851 | orchestrator |             {
2025-08-29 14:49:01.524862 | orchestrator |                 "pv_name": "/dev/sdc",
2025-08-29 14:49:01.524872 | orchestrator |                 "vg_name": "ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6"
2025-08-29 14:49:01.524891 | orchestrator |             }
2025-08-29 14:49:01.524909 | orchestrator |         ]
2025-08-29 14:49:01.524928 | orchestrator |     }
2025-08-29 14:49:01.524947 | orchestrator | }
2025-08-29 14:49:01.524966 | orchestrator |
2025-08-29 14:49:01.524986 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 14:49:01.525004 | orchestrator |
2025-08-29 14:49:01.525024 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:49:01.525042 | orchestrator | Friday 29 August 2025 14:48:58 +0000 (0:00:00.358) 0:00:25.604 *********
2025-08-29 14:49:01.525058 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-08-29 14:49:01.525080 | orchestrator |
2025-08-29 14:49:01.525091 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:49:01.525102 | orchestrator | Friday 29 August 2025 14:48:58 +0000 (0:00:00.239) 0:00:25.890 *********
2025-08-29 14:49:01.525113 | orchestrator | ok:
[testbed-node-4]
2025-08-29 14:49:01.525123 | orchestrator |
2025-08-29 14:49:01.525134 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:01.525144 | orchestrator | Friday 29 August 2025 14:48:59 +0000 (0:00:00.239) 0:00:26.129 *********
2025-08-29 14:49:01.525155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-08-29 14:49:01.525166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-08-29 14:49:01.525176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-08-29 14:49:01.525187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-08-29 14:49:01.525197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-08-29 14:49:01.525208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-08-29 14:49:01.525218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-08-29 14:49:01.525235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-08-29 14:49:01.525246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-08-29 14:49:01.525256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-08-29 14:49:01.525267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-08-29 14:49:01.525278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-08-29 14:49:01.525289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-08-29 14:49:01.525300 | orchestrator |
2025-08-29 14:49:01.525353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:01.525366 | orchestrator | Friday 29 August 2025 14:48:59 +0000 (0:00:00.425) 0:00:26.554 *********
2025-08-29 14:49:01.525377 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:01.525387 | orchestrator |
2025-08-29 14:49:01.525398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:01.525408 | orchestrator | Friday 29 August 2025 14:48:59 +0000 (0:00:00.238) 0:00:26.792 *********
2025-08-29 14:49:01.525419 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:01.525429 | orchestrator |
2025-08-29 14:49:01.525440 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:01.525450 | orchestrator | Friday 29 August 2025 14:48:59 +0000 (0:00:00.199) 0:00:26.992 *********
2025-08-29 14:49:01.525461 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:01.525472 | orchestrator |
2025-08-29 14:49:01.525482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:01.525493 | orchestrator | Friday 29 August 2025 14:49:00 +0000 (0:00:00.656) 0:00:27.649 *********
2025-08-29 14:49:01.525503 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:01.525514 | orchestrator |
2025-08-29 14:49:01.525524 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:01.525535 | orchestrator | Friday 29 August 2025 14:49:00 +0000 (0:00:00.228) 0:00:27.878 *********
2025-08-29 14:49:01.525545 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:01.525556 | orchestrator |
2025-08-29 14:49:01.525566 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:01.525577 | orchestrator | Friday 29 August 2025 14:49:01 +0000 (0:00:00.209) 0:00:28.087 *********
2025-08-29 14:49:01.525587 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:01.525598 | orchestrator |
2025-08-29 14:49:01.525617 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:01.525628 | orchestrator | Friday 29 August 2025 14:49:01 +0000 (0:00:00.217) 0:00:28.304 *********
2025-08-29 14:49:01.525639 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:01.525650 | orchestrator |
2025-08-29 14:49:01.525670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:12.348388 | orchestrator | Friday 29 August 2025 14:49:01 +0000 (0:00:00.207) 0:00:28.512 *********
2025-08-29 14:49:12.348496 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:12.348510 | orchestrator |
2025-08-29 14:49:12.348521 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:12.348530 | orchestrator | Friday 29 August 2025 14:49:01 +0000 (0:00:00.200) 0:00:28.712 *********
2025-08-29 14:49:12.348539 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a)
2025-08-29 14:49:12.348550 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a)
2025-08-29 14:49:12.348558 | orchestrator |
2025-08-29 14:49:12.348567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:12.348576 | orchestrator | Friday 29 August 2025 14:49:02 +0000 (0:00:00.469) 0:00:29.181 *********
2025-08-29 14:49:12.348585 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24)
2025-08-29 14:49:12.348594 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24)
2025-08-29 14:49:12.348603 | orchestrator |
2025-08-29 14:49:12.348611 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:12.348620 | orchestrator | Friday 29 August 2025 14:49:02 +0000 (0:00:00.469) 0:00:29.651 *********
2025-08-29 14:49:12.348629 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9)
2025-08-29 14:49:12.348638 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9)
2025-08-29 14:49:12.348647 | orchestrator |
2025-08-29 14:49:12.348655 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:12.348664 | orchestrator | Friday 29 August 2025 14:49:03 +0000 (0:00:00.483) 0:00:30.134 *********
2025-08-29 14:49:12.348673 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f)
2025-08-29 14:49:12.348682 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f)
2025-08-29 14:49:12.348691 | orchestrator |
2025-08-29 14:49:12.348706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:12.348716 | orchestrator | Friday 29 August 2025 14:49:03 +0000 (0:00:00.453) 0:00:30.588 *********
2025-08-29 14:49:12.348725 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-08-29 14:49:12.348733 | orchestrator |
2025-08-29 14:49:12.348742 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:49:12.348751 | orchestrator | Friday 29 August 2025 14:49:03 +0000 (0:00:00.352) 0:00:30.940 *********
2025-08-29 14:49:12.348759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-08-29 14:49:12.348769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-08-29 14:49:12.348777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-08-29 14:49:12.348786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-08-29 14:49:12.348795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-08-29 14:49:12.348803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-08-29 14:49:12.348812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-08-29 14:49:12.348842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-08-29 14:49:12.348851 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-08-29 14:49:12.348861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-08-29 14:49:12.348871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-08-29 14:49:12.348881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-08-29 14:49:12.348890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-08-29 14:49:12.348900 | orchestrator |
2025-08-29 14:49:12.348925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:49:12.348935 | orchestrator | Friday 29 August 2025 14:49:04 +0000 (0:00:00.669) 0:00:31.609 *********
2025-08-29 14:49:12.348945 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:12.348954 | orchestrator |
2025-08-29 14:49:12.348964 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-08-29 14:49:12.348974 | orchestrator | Friday 29
August 2025 14:49:04 +0000 (0:00:00.239) 0:00:31.849 ********* 2025-08-29 14:49:12.348985 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349001 | orchestrator | 2025-08-29 14:49:12.349011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349021 | orchestrator | Friday 29 August 2025 14:49:05 +0000 (0:00:00.202) 0:00:32.052 ********* 2025-08-29 14:49:12.349030 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349040 | orchestrator | 2025-08-29 14:49:12.349050 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349060 | orchestrator | Friday 29 August 2025 14:49:05 +0000 (0:00:00.227) 0:00:32.280 ********* 2025-08-29 14:49:12.349070 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349080 | orchestrator | 2025-08-29 14:49:12.349106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349116 | orchestrator | Friday 29 August 2025 14:49:05 +0000 (0:00:00.229) 0:00:32.510 ********* 2025-08-29 14:49:12.349124 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349134 | orchestrator | 2025-08-29 14:49:12.349149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349158 | orchestrator | Friday 29 August 2025 14:49:05 +0000 (0:00:00.218) 0:00:32.728 ********* 2025-08-29 14:49:12.349166 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349175 | orchestrator | 2025-08-29 14:49:12.349184 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349192 | orchestrator | Friday 29 August 2025 14:49:05 +0000 (0:00:00.216) 0:00:32.946 ********* 2025-08-29 14:49:12.349201 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349209 | orchestrator | 2025-08-29 14:49:12.349218 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349226 | orchestrator | Friday 29 August 2025 14:49:06 +0000 (0:00:00.212) 0:00:33.158 ********* 2025-08-29 14:49:12.349235 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349243 | orchestrator | 2025-08-29 14:49:12.349252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349266 | orchestrator | Friday 29 August 2025 14:49:06 +0000 (0:00:00.234) 0:00:33.392 ********* 2025-08-29 14:49:12.349277 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-08-29 14:49:12.349285 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-08-29 14:49:12.349295 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-08-29 14:49:12.349303 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-08-29 14:49:12.349360 | orchestrator | 2025-08-29 14:49:12.349370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349378 | orchestrator | Friday 29 August 2025 14:49:07 +0000 (0:00:00.873) 0:00:34.266 ********* 2025-08-29 14:49:12.349395 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349404 | orchestrator | 2025-08-29 14:49:12.349412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349421 | orchestrator | Friday 29 August 2025 14:49:07 +0000 (0:00:00.204) 0:00:34.471 ********* 2025-08-29 14:49:12.349430 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349438 | orchestrator | 2025-08-29 14:49:12.349447 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349455 | orchestrator | Friday 29 August 2025 14:49:07 +0000 (0:00:00.219) 0:00:34.691 ********* 2025-08-29 14:49:12.349464 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349472 | 
orchestrator | 2025-08-29 14:49:12.349481 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:12.349490 | orchestrator | Friday 29 August 2025 14:49:08 +0000 (0:00:00.692) 0:00:35.384 ********* 2025-08-29 14:49:12.349498 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349507 | orchestrator | 2025-08-29 14:49:12.349516 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:49:12.349524 | orchestrator | Friday 29 August 2025 14:49:08 +0000 (0:00:00.217) 0:00:35.601 ********* 2025-08-29 14:49:12.349538 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349547 | orchestrator | 2025-08-29 14:49:12.349555 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-08-29 14:49:12.349564 | orchestrator | Friday 29 August 2025 14:49:08 +0000 (0:00:00.139) 0:00:35.740 ********* 2025-08-29 14:49:12.349573 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cd5b7d9a-1dd4-5184-a319-6c247fab2039'}}) 2025-08-29 14:49:12.349582 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '95dc25c6-61fb-51c1-a723-34c7e57ec220'}}) 2025-08-29 14:49:12.349591 | orchestrator | 2025-08-29 14:49:12.349599 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:49:12.349608 | orchestrator | Friday 29 August 2025 14:49:08 +0000 (0:00:00.176) 0:00:35.917 ********* 2025-08-29 14:49:12.349618 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'}) 2025-08-29 14:49:12.349628 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'}) 2025-08-29 14:49:12.349637 | 
orchestrator | 2025-08-29 14:49:12.349646 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:49:12.349654 | orchestrator | Friday 29 August 2025 14:49:10 +0000 (0:00:01.867) 0:00:37.784 ********* 2025-08-29 14:49:12.349663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:12.349673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:12.349682 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:12.349691 | orchestrator | 2025-08-29 14:49:12.349699 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-08-29 14:49:12.349708 | orchestrator | Friday 29 August 2025 14:49:10 +0000 (0:00:00.166) 0:00:37.951 ********* 2025-08-29 14:49:12.349723 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'}) 2025-08-29 14:49:12.349734 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'}) 2025-08-29 14:49:12.349742 | orchestrator | 2025-08-29 14:49:12.349758 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:49:18.363289 | orchestrator | Friday 29 August 2025 14:49:12 +0000 (0:00:01.376) 0:00:39.327 ********* 2025-08-29 14:49:18.363474 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:18.363493 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:18.363505 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.363518 | orchestrator | 2025-08-29 14:49:18.363530 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:49:18.363541 | orchestrator | Friday 29 August 2025 14:49:12 +0000 (0:00:00.187) 0:00:39.515 ********* 2025-08-29 14:49:18.363552 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.363563 | orchestrator | 2025-08-29 14:49:18.363574 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:49:18.363585 | orchestrator | Friday 29 August 2025 14:49:12 +0000 (0:00:00.152) 0:00:39.667 ********* 2025-08-29 14:49:18.363604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:18.363616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:18.363657 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.363676 | orchestrator | 2025-08-29 14:49:18.363694 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:49:18.363714 | orchestrator | Friday 29 August 2025 14:49:12 +0000 (0:00:00.171) 0:00:39.838 ********* 2025-08-29 14:49:18.363731 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.363743 | orchestrator | 2025-08-29 14:49:18.363753 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:49:18.363764 | orchestrator | Friday 29 August 2025 14:49:12 +0000 (0:00:00.143) 0:00:39.982 ********* 2025-08-29 14:49:18.363775 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:18.363786 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:18.363797 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.363807 | orchestrator | 2025-08-29 14:49:18.363819 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:49:18.363831 | orchestrator | Friday 29 August 2025 14:49:13 +0000 (0:00:00.179) 0:00:40.161 ********* 2025-08-29 14:49:18.363859 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.363871 | orchestrator | 2025-08-29 14:49:18.363884 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:49:18.363898 | orchestrator | Friday 29 August 2025 14:49:13 +0000 (0:00:00.367) 0:00:40.528 ********* 2025-08-29 14:49:18.363910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:18.363923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:18.363935 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.363948 | orchestrator | 2025-08-29 14:49:18.363960 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:49:18.363972 | orchestrator | Friday 29 August 2025 14:49:13 +0000 (0:00:00.158) 0:00:40.686 ********* 2025-08-29 14:49:18.363985 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:49:18.363998 | orchestrator | 2025-08-29 14:49:18.364011 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-08-29 14:49:18.364023 | orchestrator | Friday 29 August 2025 14:49:13 +0000 (0:00:00.159) 0:00:40.846 ********* 2025-08-29 14:49:18.364045 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:18.364058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:18.364070 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.364082 | orchestrator | 2025-08-29 14:49:18.364095 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:49:18.364108 | orchestrator | Friday 29 August 2025 14:49:14 +0000 (0:00:00.172) 0:00:41.018 ********* 2025-08-29 14:49:18.364120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:18.364132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:18.364144 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.364156 | orchestrator | 2025-08-29 14:49:18.364168 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:49:18.364180 | orchestrator | Friday 29 August 2025 14:49:14 +0000 (0:00:00.169) 0:00:41.187 ********* 2025-08-29 14:49:18.364209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:18.364221 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 
'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:18.364232 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.364242 | orchestrator | 2025-08-29 14:49:18.364253 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:49:18.364264 | orchestrator | Friday 29 August 2025 14:49:14 +0000 (0:00:00.156) 0:00:41.344 ********* 2025-08-29 14:49:18.364275 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.364285 | orchestrator | 2025-08-29 14:49:18.364296 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:49:18.364329 | orchestrator | Friday 29 August 2025 14:49:14 +0000 (0:00:00.138) 0:00:41.482 ********* 2025-08-29 14:49:18.364342 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.364352 | orchestrator | 2025-08-29 14:49:18.364363 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:49:18.364374 | orchestrator | Friday 29 August 2025 14:49:14 +0000 (0:00:00.145) 0:00:41.628 ********* 2025-08-29 14:49:18.364384 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.364395 | orchestrator | 2025-08-29 14:49:18.364405 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:49:18.364416 | orchestrator | Friday 29 August 2025 14:49:14 +0000 (0:00:00.146) 0:00:41.774 ********* 2025-08-29 14:49:18.364426 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:49:18.364437 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:49:18.364448 | orchestrator | } 2025-08-29 14:49:18.364466 | orchestrator | 2025-08-29 14:49:18.364481 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:49:18.364491 | orchestrator | Friday 29 August 2025 14:49:14 +0000 (0:00:00.142) 0:00:41.917 ********* 2025-08-29 14:49:18.364502 | 
orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:49:18.364513 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:49:18.364524 | orchestrator | } 2025-08-29 14:49:18.364534 | orchestrator | 2025-08-29 14:49:18.364545 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:49:18.364556 | orchestrator | Friday 29 August 2025 14:49:15 +0000 (0:00:00.160) 0:00:42.077 ********* 2025-08-29 14:49:18.364566 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:49:18.364577 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:49:18.364595 | orchestrator | } 2025-08-29 14:49:18.364606 | orchestrator | 2025-08-29 14:49:18.364617 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 14:49:18.364628 | orchestrator | Friday 29 August 2025 14:49:15 +0000 (0:00:00.155) 0:00:42.233 ********* 2025-08-29 14:49:18.364638 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:49:18.364649 | orchestrator | 2025-08-29 14:49:18.364660 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 14:49:18.364670 | orchestrator | Friday 29 August 2025 14:49:15 +0000 (0:00:00.756) 0:00:42.989 ********* 2025-08-29 14:49:18.364681 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:49:18.364692 | orchestrator | 2025-08-29 14:49:18.364703 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:49:18.364715 | orchestrator | Friday 29 August 2025 14:49:16 +0000 (0:00:00.602) 0:00:43.591 ********* 2025-08-29 14:49:18.364734 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:49:18.364753 | orchestrator | 2025-08-29 14:49:18.364769 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:49:18.364780 | orchestrator | Friday 29 August 2025 14:49:17 +0000 (0:00:00.580) 0:00:44.172 ********* 2025-08-29 
14:49:18.364790 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:49:18.364801 | orchestrator | 2025-08-29 14:49:18.364811 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:49:18.364822 | orchestrator | Friday 29 August 2025 14:49:17 +0000 (0:00:00.150) 0:00:44.323 ********* 2025-08-29 14:49:18.364832 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.364843 | orchestrator | 2025-08-29 14:49:18.364853 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:49:18.364864 | orchestrator | Friday 29 August 2025 14:49:17 +0000 (0:00:00.117) 0:00:44.440 ********* 2025-08-29 14:49:18.364874 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.364885 | orchestrator | 2025-08-29 14:49:18.364895 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:49:18.364906 | orchestrator | Friday 29 August 2025 14:49:17 +0000 (0:00:00.125) 0:00:44.566 ********* 2025-08-29 14:49:18.364916 | orchestrator | ok: [testbed-node-4] => { 2025-08-29 14:49:18.364927 | orchestrator |  "vgs_report": { 2025-08-29 14:49:18.364939 | orchestrator |  "vg": [] 2025-08-29 14:49:18.364950 | orchestrator |  } 2025-08-29 14:49:18.364961 | orchestrator | } 2025-08-29 14:49:18.364971 | orchestrator | 2025-08-29 14:49:18.364982 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:49:18.364993 | orchestrator | Friday 29 August 2025 14:49:17 +0000 (0:00:00.150) 0:00:44.716 ********* 2025-08-29 14:49:18.365003 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.365014 | orchestrator | 2025-08-29 14:49:18.365024 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:49:18.365035 | orchestrator | Friday 29 August 2025 14:49:17 +0000 (0:00:00.145) 0:00:44.862 ********* 2025-08-29 
14:49:18.365045 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.365056 | orchestrator | 2025-08-29 14:49:18.365074 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:49:18.365085 | orchestrator | Friday 29 August 2025 14:49:18 +0000 (0:00:00.197) 0:00:45.059 ********* 2025-08-29 14:49:18.365096 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.365107 | orchestrator | 2025-08-29 14:49:18.365117 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:49:18.365128 | orchestrator | Friday 29 August 2025 14:49:18 +0000 (0:00:00.130) 0:00:45.189 ********* 2025-08-29 14:49:18.365139 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:18.365149 | orchestrator | 2025-08-29 14:49:18.365160 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:49:18.365178 | orchestrator | Friday 29 August 2025 14:49:18 +0000 (0:00:00.159) 0:00:45.349 ********* 2025-08-29 14:49:23.387270 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388041 | orchestrator | 2025-08-29 14:49:23.388088 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:49:23.388101 | orchestrator | Friday 29 August 2025 14:49:18 +0000 (0:00:00.151) 0:00:45.501 ********* 2025-08-29 14:49:23.388112 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388121 | orchestrator | 2025-08-29 14:49:23.388130 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:49:23.388139 | orchestrator | Friday 29 August 2025 14:49:18 +0000 (0:00:00.366) 0:00:45.867 ********* 2025-08-29 14:49:23.388147 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388156 | orchestrator | 2025-08-29 14:49:23.388165 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-08-29 14:49:23.388174 | orchestrator | Friday 29 August 2025 14:49:19 +0000 (0:00:00.147) 0:00:46.014 ********* 2025-08-29 14:49:23.388182 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388191 | orchestrator | 2025-08-29 14:49:23.388200 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:49:23.388208 | orchestrator | Friday 29 August 2025 14:49:19 +0000 (0:00:00.136) 0:00:46.151 ********* 2025-08-29 14:49:23.388217 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388225 | orchestrator | 2025-08-29 14:49:23.388234 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:49:23.388242 | orchestrator | Friday 29 August 2025 14:49:19 +0000 (0:00:00.146) 0:00:46.297 ********* 2025-08-29 14:49:23.388251 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388259 | orchestrator | 2025-08-29 14:49:23.388268 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:49:23.388277 | orchestrator | Friday 29 August 2025 14:49:19 +0000 (0:00:00.149) 0:00:46.447 ********* 2025-08-29 14:49:23.388287 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388301 | orchestrator | 2025-08-29 14:49:23.388361 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:49:23.388377 | orchestrator | Friday 29 August 2025 14:49:19 +0000 (0:00:00.151) 0:00:46.598 ********* 2025-08-29 14:49:23.388390 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388404 | orchestrator | 2025-08-29 14:49:23.388418 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:49:23.388431 | orchestrator | Friday 29 August 2025 14:49:19 +0000 (0:00:00.142) 0:00:46.741 ********* 2025-08-29 14:49:23.388447 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 14:49:23.388462 | orchestrator | 2025-08-29 14:49:23.388477 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:49:23.388492 | orchestrator | Friday 29 August 2025 14:49:19 +0000 (0:00:00.135) 0:00:46.877 ********* 2025-08-29 14:49:23.388503 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388512 | orchestrator | 2025-08-29 14:49:23.388521 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:49:23.388529 | orchestrator | Friday 29 August 2025 14:49:20 +0000 (0:00:00.149) 0:00:47.026 ********* 2025-08-29 14:49:23.388554 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.388565 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.388573 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388582 | orchestrator | 2025-08-29 14:49:23.388590 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:49:23.388599 | orchestrator | Friday 29 August 2025 14:49:20 +0000 (0:00:00.154) 0:00:47.181 ********* 2025-08-29 14:49:23.388607 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.388616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.388633 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388642 | orchestrator | 2025-08-29 14:49:23.388650 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-08-29 14:49:23.388659 | orchestrator | Friday 29 August 2025 14:49:20 +0000 (0:00:00.169) 0:00:47.350 ********* 2025-08-29 14:49:23.388667 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.388676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.388685 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388693 | orchestrator | 2025-08-29 14:49:23.388702 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:49:23.388710 | orchestrator | Friday 29 August 2025 14:49:20 +0000 (0:00:00.183) 0:00:47.534 ********* 2025-08-29 14:49:23.388719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.388727 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.388736 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388744 | orchestrator | 2025-08-29 14:49:23.388753 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:49:23.388779 | orchestrator | Friday 29 August 2025 14:49:20 +0000 (0:00:00.399) 0:00:47.933 ********* 2025-08-29 14:49:23.388791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.388805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 
'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.388817 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388831 | orchestrator | 2025-08-29 14:49:23.388846 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:49:23.388860 | orchestrator | Friday 29 August 2025 14:49:21 +0000 (0:00:00.160) 0:00:48.093 ********* 2025-08-29 14:49:23.388874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.388888 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.388902 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388912 | orchestrator | 2025-08-29 14:49:23.388920 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:49:23.388929 | orchestrator | Friday 29 August 2025 14:49:21 +0000 (0:00:00.166) 0:00:48.259 ********* 2025-08-29 14:49:23.388937 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.388946 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.388954 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.388963 | orchestrator | 2025-08-29 14:49:23.388971 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:49:23.388980 | orchestrator | Friday 29 August 2025 14:49:21 +0000 (0:00:00.157) 0:00:48.417 ********* 2025-08-29 14:49:23.388988 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.389004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.389012 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.389021 | orchestrator | 2025-08-29 14:49:23.389035 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:49:23.389044 | orchestrator | Friday 29 August 2025 14:49:21 +0000 (0:00:00.152) 0:00:48.569 ********* 2025-08-29 14:49:23.389070 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:49:23.389079 | orchestrator | 2025-08-29 14:49:23.389088 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-08-29 14:49:23.389097 | orchestrator | Friday 29 August 2025 14:49:22 +0000 (0:00:00.546) 0:00:49.116 ********* 2025-08-29 14:49:23.389105 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:49:23.389114 | orchestrator | 2025-08-29 14:49:23.389122 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-08-29 14:49:23.389131 | orchestrator | Friday 29 August 2025 14:49:22 +0000 (0:00:00.529) 0:00:49.645 ********* 2025-08-29 14:49:23.389140 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:49:23.389148 | orchestrator | 2025-08-29 14:49:23.389157 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-08-29 14:49:23.389165 | orchestrator | Friday 29 August 2025 14:49:22 +0000 (0:00:00.162) 0:00:49.808 ********* 2025-08-29 14:49:23.389174 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'vg_name': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'}) 2025-08-29 14:49:23.389184 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'vg_name': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'}) 2025-08-29 14:49:23.389193 | orchestrator | 2025-08-29 14:49:23.389201 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-08-29 14:49:23.389210 | orchestrator | Friday 29 August 2025 14:49:22 +0000 (0:00:00.179) 0:00:49.988 ********* 2025-08-29 14:49:23.389219 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.389227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.389236 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:23.389244 | orchestrator | 2025-08-29 14:49:23.389253 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-08-29 14:49:23.389262 | orchestrator | Friday 29 August 2025 14:49:23 +0000 (0:00:00.228) 0:00:50.216 ********* 2025-08-29 14:49:23.389270 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})  2025-08-29 14:49:23.389279 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})  2025-08-29 14:49:23.389295 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:49:29.901603 | orchestrator | 2025-08-29 14:49:29.901709 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-08-29 14:49:29.901717 | orchestrator | Friday 29 August 2025 14:49:23 +0000 (0:00:00.156) 0:00:50.372 ********* 2025-08-29 14:49:29.901723 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'})
2025-08-29 14:49:29.901731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'})
2025-08-29 14:49:29.901735 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:49:29.901740 | orchestrator |
2025-08-29 14:49:29.901744 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-08-29 14:49:29.901748 | orchestrator | Friday 29 August 2025 14:49:23 +0000 (0:00:00.170) 0:00:50.543 *********
2025-08-29 14:49:29.901771 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 14:49:29.901776 | orchestrator |  "lvm_report": {
2025-08-29 14:49:29.901782 | orchestrator |  "lv": [
2025-08-29 14:49:29.901786 | orchestrator |  {
2025-08-29 14:49:29.901790 | orchestrator |  "lv_name": "osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220",
2025-08-29 14:49:29.901795 | orchestrator |  "vg_name": "ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220"
2025-08-29 14:49:29.901799 | orchestrator |  },
2025-08-29 14:49:29.901802 | orchestrator |  {
2025-08-29 14:49:29.901806 | orchestrator |  "lv_name": "osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039",
2025-08-29 14:49:29.901810 | orchestrator |  "vg_name": "ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039"
2025-08-29 14:49:29.901814 | orchestrator |  }
2025-08-29 14:49:29.901817 | orchestrator |  ],
2025-08-29 14:49:29.901821 | orchestrator |  "pv": [
2025-08-29 14:49:29.901825 | orchestrator |  {
2025-08-29 14:49:29.901829 | orchestrator |  "pv_name": "/dev/sdb",
2025-08-29 14:49:29.901833 | orchestrator |  "vg_name": "ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039"
2025-08-29 14:49:29.901837 | orchestrator |  },
2025-08-29 14:49:29.901840 | orchestrator |  {
2025-08-29 14:49:29.901844 | orchestrator |  "pv_name": "/dev/sdc",
2025-08-29 14:49:29.901848 | orchestrator |  "vg_name": "ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220"
2025-08-29 14:49:29.901851 | orchestrator |  }
2025-08-29 14:49:29.901855 | orchestrator |  ]
2025-08-29 14:49:29.901859 | orchestrator |  }
2025-08-29 14:49:29.901863 | orchestrator | }
2025-08-29 14:49:29.901867 | orchestrator |
2025-08-29 14:49:29.901871 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-08-29 14:49:29.901875 | orchestrator |
2025-08-29 14:49:29.901879 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-08-29 14:49:29.901882 | orchestrator | Friday 29 August 2025 14:49:24 +0000 (0:00:00.534) 0:00:51.078 *********
2025-08-29 14:49:29.901886 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-08-29 14:49:29.901891 | orchestrator |
2025-08-29 14:49:29.901895 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-08-29 14:49:29.901898 | orchestrator | Friday 29 August 2025 14:49:24 +0000 (0:00:00.270) 0:00:51.348 *********
2025-08-29 14:49:29.901902 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:49:29.901907 | orchestrator |
2025-08-29 14:49:29.901911 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-08-29 14:49:29.901915 | orchestrator | Friday 29 August 2025 14:49:24 +0000 (0:00:00.270) 0:00:51.619 *********
2025-08-29 14:49:29.901919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-08-29 14:49:29.901922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-08-29 14:49:29.901926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-08-29 14:49:29.901930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-08-29 14:49:29.901934 | orchestrator | included:
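[Editor's note] The "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" and "Create list of VG/LV names" tasks above merge the JSON reports of `lvs` and `pvs` into the `lvm_report` structure printed in the log. A minimal sketch of that merge, assuming the standard `--reportformat json` layout (`{"report": [{"lv": [...]}]}`); the variable names `_lvs_cmd_output`/`_pvs_cmd_output` are taken from the task names, the rest is illustrative:

```python
import json

# Sample reports in the shape produced by `lvs --reportformat json -o lv_name,vg_name`
# and `pvs --reportformat json -o pv_name,vg_name` (values copied from the log above).
_lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220",
     "vg_name": "ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220"}]}]})
_pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039"}]}]})

# Combine both reports into one dict, mirroring the printed lvm_report.
lvm_report = {
    "lv": json.loads(_lvs_cmd_output)["report"][0]["lv"],
    "pv": json.loads(_pvs_cmd_output)["report"][0]["pv"],
}

# "Create list of VG/LV names": one "vg/lv" identifier per logical volume.
vg_lv_names = [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]
print(vg_lv_names)
```

The "Fail if ... LV defined in lvm_volumes is missing" checks can then test membership of each expected `data_vg`/`data` pair against such a list.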
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:49:29.901937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:49:29.901941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:49:29.901945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:49:29.901948 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-08-29 14:49:29.901952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:49:29.901956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:49:29.901964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:49:29.901967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:49:29.901971 | orchestrator | 2025-08-29 14:49:29.901975 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.901979 | orchestrator | Friday 29 August 2025 14:49:25 +0000 (0:00:00.430) 0:00:52.050 ********* 2025-08-29 14:49:29.901982 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:29.901989 | orchestrator | 2025-08-29 14:49:29.901993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.901997 | orchestrator | Friday 29 August 2025 14:49:25 +0000 (0:00:00.221) 0:00:52.272 ********* 2025-08-29 14:49:29.902000 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:29.902004 | orchestrator | 2025-08-29 14:49:29.902008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902057 | orchestrator | 
Friday 29 August 2025 14:49:25 +0000 (0:00:00.228) 0:00:52.500 ********* 2025-08-29 14:49:29.902062 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:29.902066 | orchestrator | 2025-08-29 14:49:29.902069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902073 | orchestrator | Friday 29 August 2025 14:49:25 +0000 (0:00:00.206) 0:00:52.706 ********* 2025-08-29 14:49:29.902077 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:29.902081 | orchestrator | 2025-08-29 14:49:29.902084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902088 | orchestrator | Friday 29 August 2025 14:49:25 +0000 (0:00:00.205) 0:00:52.912 ********* 2025-08-29 14:49:29.902092 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:29.902096 | orchestrator | 2025-08-29 14:49:29.902099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902103 | orchestrator | Friday 29 August 2025 14:49:26 +0000 (0:00:00.215) 0:00:53.127 ********* 2025-08-29 14:49:29.902107 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:29.902110 | orchestrator | 2025-08-29 14:49:29.902114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902118 | orchestrator | Friday 29 August 2025 14:49:26 +0000 (0:00:00.632) 0:00:53.759 ********* 2025-08-29 14:49:29.902122 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:29.902130 | orchestrator | 2025-08-29 14:49:29.902134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902138 | orchestrator | Friday 29 August 2025 14:49:26 +0000 (0:00:00.222) 0:00:53.982 ********* 2025-08-29 14:49:29.902142 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:29.902145 | orchestrator | 2025-08-29 14:49:29.902149 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902153 | orchestrator | Friday 29 August 2025 14:49:27 +0000 (0:00:00.205) 0:00:54.187 ********* 2025-08-29 14:49:29.902157 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8) 2025-08-29 14:49:29.902204 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8) 2025-08-29 14:49:29.902209 | orchestrator | 2025-08-29 14:49:29.902212 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902216 | orchestrator | Friday 29 August 2025 14:49:27 +0000 (0:00:00.453) 0:00:54.640 ********* 2025-08-29 14:49:29.902220 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b) 2025-08-29 14:49:29.902224 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b) 2025-08-29 14:49:29.902228 | orchestrator | 2025-08-29 14:49:29.902231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902235 | orchestrator | Friday 29 August 2025 14:49:28 +0000 (0:00:00.467) 0:00:55.108 ********* 2025-08-29 14:49:29.902246 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a) 2025-08-29 14:49:29.902250 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a) 2025-08-29 14:49:29.902254 | orchestrator | 2025-08-29 14:49:29.902258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902261 | orchestrator | Friday 29 August 2025 14:49:28 +0000 (0:00:00.432) 0:00:55.540 ********* 2025-08-29 14:49:29.902265 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8) 2025-08-29 14:49:29.902269 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8) 2025-08-29 14:49:29.902272 | orchestrator | 2025-08-29 14:49:29.902276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-08-29 14:49:29.902280 | orchestrator | Friday 29 August 2025 14:49:29 +0000 (0:00:00.517) 0:00:56.057 ********* 2025-08-29 14:49:29.902284 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-08-29 14:49:29.902287 | orchestrator | 2025-08-29 14:49:29.902291 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:29.902295 | orchestrator | Friday 29 August 2025 14:49:29 +0000 (0:00:00.354) 0:00:56.412 ********* 2025-08-29 14:49:29.902299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-08-29 14:49:29.902302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-08-29 14:49:29.902318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-08-29 14:49:29.902323 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-08-29 14:49:29.902326 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-08-29 14:49:29.902330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-08-29 14:49:29.902334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-08-29 14:49:29.902337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-08-29 14:49:29.902341 | orchestrator | included: 
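[Editor's note] The "Add known links to the list of available block devices" loop above associates each device (sda..sdd, sr0) with its persistent `/dev/disk/by-id` aliases such as `scsi-0QEMU_QEMU_HARDDISK_...`. A sketch of that mapping via symlink resolution, demonstrated on a temporary directory because the real `/dev/disk/by-id` layout is host-specific; the helper name `device_links` is hypothetical:

```python
import os
import tempfile

def device_links(by_id_dir, device):
    """Return the by-id symlink names that resolve to the given device node."""
    links = []
    for name in sorted(os.listdir(by_id_dir)):
        path = os.path.join(by_id_dir, name)
        if os.path.islink(path) and os.path.basename(os.path.realpath(path)) == device:
            links.append(name)
    return links

# Simulated /dev/disk/by-id with one QEMU disk alias pointing at "sdb".
tmp = tempfile.mkdtemp()
dev = os.path.join(tmp, "sdb")
open(dev, "w").close()
os.symlink(dev, os.path.join(tmp, "scsi-0QEMU_QEMU_HARDDISK_example"))

print(device_links(tmp, "sdb"))  # → ['scsi-0QEMU_QEMU_HARDDISK_example']
```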
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-08-29 14:49:29.902345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-08-29 14:49:29.902349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-08-29 14:49:29.902355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-08-29 14:49:40.812188 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-08-29 14:49:40.812304 | orchestrator | 2025-08-29 14:49:40.812382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812394 | orchestrator | Friday 29 August 2025 14:49:29 +0000 (0:00:00.466) 0:00:56.879 ********* 2025-08-29 14:49:40.812404 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812416 | orchestrator | 2025-08-29 14:49:40.812426 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812436 | orchestrator | Friday 29 August 2025 14:49:30 +0000 (0:00:00.208) 0:00:57.087 ********* 2025-08-29 14:49:40.812446 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812456 | orchestrator | 2025-08-29 14:49:40.812472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812489 | orchestrator | Friday 29 August 2025 14:49:30 +0000 (0:00:00.222) 0:00:57.309 ********* 2025-08-29 14:49:40.812505 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812522 | orchestrator | 2025-08-29 14:49:40.812533 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812566 | orchestrator | Friday 29 August 2025 14:49:31 +0000 (0:00:00.700) 0:00:58.010 ********* 2025-08-29 14:49:40.812576 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 14:49:40.812586 | orchestrator | 2025-08-29 14:49:40.812595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812605 | orchestrator | Friday 29 August 2025 14:49:31 +0000 (0:00:00.245) 0:00:58.256 ********* 2025-08-29 14:49:40.812615 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812624 | orchestrator | 2025-08-29 14:49:40.812634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812643 | orchestrator | Friday 29 August 2025 14:49:31 +0000 (0:00:00.270) 0:00:58.526 ********* 2025-08-29 14:49:40.812652 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812662 | orchestrator | 2025-08-29 14:49:40.812671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812681 | orchestrator | Friday 29 August 2025 14:49:31 +0000 (0:00:00.234) 0:00:58.761 ********* 2025-08-29 14:49:40.812690 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812699 | orchestrator | 2025-08-29 14:49:40.812709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812721 | orchestrator | Friday 29 August 2025 14:49:32 +0000 (0:00:00.238) 0:00:59.000 ********* 2025-08-29 14:49:40.812732 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812743 | orchestrator | 2025-08-29 14:49:40.812754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812765 | orchestrator | Friday 29 August 2025 14:49:32 +0000 (0:00:00.203) 0:00:59.203 ********* 2025-08-29 14:49:40.812775 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-08-29 14:49:40.812787 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-08-29 14:49:40.812813 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-08-29 
14:49:40.812825 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-08-29 14:49:40.812835 | orchestrator | 2025-08-29 14:49:40.812846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812858 | orchestrator | Friday 29 August 2025 14:49:32 +0000 (0:00:00.670) 0:00:59.874 ********* 2025-08-29 14:49:40.812869 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812880 | orchestrator | 2025-08-29 14:49:40.812890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812901 | orchestrator | Friday 29 August 2025 14:49:33 +0000 (0:00:00.244) 0:01:00.118 ********* 2025-08-29 14:49:40.812911 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812922 | orchestrator | 2025-08-29 14:49:40.812934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812945 | orchestrator | Friday 29 August 2025 14:49:33 +0000 (0:00:00.217) 0:01:00.335 ********* 2025-08-29 14:49:40.812956 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.812967 | orchestrator | 2025-08-29 14:49:40.812978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-08-29 14:49:40.812988 | orchestrator | Friday 29 August 2025 14:49:33 +0000 (0:00:00.281) 0:01:00.617 ********* 2025-08-29 14:49:40.812999 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813010 | orchestrator | 2025-08-29 14:49:40.813021 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-08-29 14:49:40.813032 | orchestrator | Friday 29 August 2025 14:49:33 +0000 (0:00:00.212) 0:01:00.830 ********* 2025-08-29 14:49:40.813042 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813053 | orchestrator | 2025-08-29 14:49:40.813064 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-08-29 14:49:40.813075 | orchestrator | Friday 29 August 2025 14:49:34 +0000 (0:00:00.391) 0:01:01.221 ********* 2025-08-29 14:49:40.813084 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ea955146-254c-5a5a-83ec-c4f4ca16d6a1'}}) 2025-08-29 14:49:40.813094 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'aeb09036-0b6a-534a-a94a-678fcf7bc5df'}}) 2025-08-29 14:49:40.813111 | orchestrator | 2025-08-29 14:49:40.813121 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-08-29 14:49:40.813130 | orchestrator | Friday 29 August 2025 14:49:34 +0000 (0:00:00.339) 0:01:01.561 ********* 2025-08-29 14:49:40.813141 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'}) 2025-08-29 14:49:40.813153 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'}) 2025-08-29 14:49:40.813162 | orchestrator | 2025-08-29 14:49:40.813172 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-08-29 14:49:40.813199 | orchestrator | Friday 29 August 2025 14:49:36 +0000 (0:00:01.990) 0:01:03.552 ********* 2025-08-29 14:49:40.813210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:40.813221 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:40.813231 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813240 | orchestrator | 2025-08-29 14:49:40.813250 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-08-29 14:49:40.813260 | orchestrator | Friday 29 August 2025 14:49:36 +0000 (0:00:00.173) 0:01:03.725 ********* 2025-08-29 14:49:40.813269 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'}) 2025-08-29 14:49:40.813279 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'}) 2025-08-29 14:49:40.813289 | orchestrator | 2025-08-29 14:49:40.813299 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-08-29 14:49:40.813329 | orchestrator | Friday 29 August 2025 14:49:39 +0000 (0:00:02.381) 0:01:06.107 ********* 2025-08-29 14:49:40.813347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:40.813364 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:40.813379 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813396 | orchestrator | 2025-08-29 14:49:40.813410 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-08-29 14:49:40.813420 | orchestrator | Friday 29 August 2025 14:49:39 +0000 (0:00:00.169) 0:01:06.277 ********* 2025-08-29 14:49:40.813429 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813439 | orchestrator | 2025-08-29 14:49:40.813448 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-08-29 14:49:40.813458 | orchestrator | Friday 29 August 2025 14:49:39 +0000 (0:00:00.143) 0:01:06.421 ********* 2025-08-29 14:49:40.813467 | 
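[Editor's note] The naming visible in the "Create block VGs" / "Create block LVs" items above follows directly from `osd_lvm_uuid` in `ceph_osd_devices`: each device gets a VG `ceph-<uuid>` holding an LV `osd-block-<uuid>`. A sketch of that derivation, with the inventory values copied from the log; the dict/list shapes are assumptions for illustration:

```python
# ceph_osd_devices as seen in the "Create dict of block VGs -> PVs" items above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "ea955146-254c-5a5a-83ec-c4f4ca16d6a1"},
    "sdc": {"osd_lvm_uuid": "aeb09036-0b6a-534a-a94a-678fcf7bc5df"},
}

# Map each block VG name to its backing physical device.
block_vgs = {f"ceph-{v['osd_lvm_uuid']}": f"/dev/{dev}"
             for dev, v in ceph_osd_devices.items()}

# Build the lvm_volumes-style items ({'data': LV, 'data_vg': VG}) that the
# create/print tasks loop over.
lvm_items = [{"data": f"osd-block-{v['osd_lvm_uuid']}",
              "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
             for v in ceph_osd_devices.values()]
print(block_vgs)
print(lvm_items)
```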
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:40.813482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:40.813492 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813502 | orchestrator | 2025-08-29 14:49:40.813511 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-08-29 14:49:40.813520 | orchestrator | Friday 29 August 2025 14:49:39 +0000 (0:00:00.173) 0:01:06.594 ********* 2025-08-29 14:49:40.813530 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813547 | orchestrator | 2025-08-29 14:49:40.813556 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-08-29 14:49:40.813566 | orchestrator | Friday 29 August 2025 14:49:39 +0000 (0:00:00.150) 0:01:06.744 ********* 2025-08-29 14:49:40.813575 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:40.813585 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:40.813595 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813604 | orchestrator | 2025-08-29 14:49:40.813614 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-08-29 14:49:40.813623 | orchestrator | Friday 29 August 2025 14:49:39 +0000 (0:00:00.174) 0:01:06.919 ********* 2025-08-29 14:49:40.813632 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813642 | orchestrator | 2025-08-29 14:49:40.813651 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-08-29 14:49:40.813661 | orchestrator | Friday 29 August 2025 14:49:40 +0000 (0:00:00.158) 0:01:07.077 ********* 2025-08-29 14:49:40.813670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:40.813680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:40.813690 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:40.813699 | orchestrator | 2025-08-29 14:49:40.813708 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-08-29 14:49:40.813718 | orchestrator | Friday 29 August 2025 14:49:40 +0000 (0:00:00.172) 0:01:07.249 ********* 2025-08-29 14:49:40.813727 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:49:40.813737 | orchestrator | 2025-08-29 14:49:40.813746 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-08-29 14:49:40.813756 | orchestrator | Friday 29 August 2025 14:49:40 +0000 (0:00:00.384) 0:01:07.634 ********* 2025-08-29 14:49:40.813772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:47.288862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:47.288972 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.289015 | orchestrator | 2025-08-29 14:49:47.289029 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-08-29 14:49:47.289041 | orchestrator | Friday 29 August 2025 
14:49:40 +0000 (0:00:00.164) 0:01:07.798 ********* 2025-08-29 14:49:47.289053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:47.289064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:47.289075 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.289086 | orchestrator | 2025-08-29 14:49:47.289097 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-08-29 14:49:47.289108 | orchestrator | Friday 29 August 2025 14:49:40 +0000 (0:00:00.164) 0:01:07.963 ********* 2025-08-29 14:49:47.289119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:47.289130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:47.289141 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.289175 | orchestrator | 2025-08-29 14:49:47.289187 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-08-29 14:49:47.289198 | orchestrator | Friday 29 August 2025 14:49:41 +0000 (0:00:00.183) 0:01:08.146 ********* 2025-08-29 14:49:47.289208 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.289218 | orchestrator | 2025-08-29 14:49:47.289229 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-08-29 14:49:47.289239 | orchestrator | Friday 29 August 2025 14:49:41 +0000 (0:00:00.149) 0:01:08.296 ********* 2025-08-29 14:49:47.289250 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
14:49:47.289260 | orchestrator | 2025-08-29 14:49:47.289271 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-08-29 14:49:47.289282 | orchestrator | Friday 29 August 2025 14:49:41 +0000 (0:00:00.151) 0:01:08.447 ********* 2025-08-29 14:49:47.289292 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.289303 | orchestrator | 2025-08-29 14:49:47.289346 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-08-29 14:49:47.289366 | orchestrator | Friday 29 August 2025 14:49:41 +0000 (0:00:00.148) 0:01:08.596 ********* 2025-08-29 14:49:47.289385 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:49:47.289406 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-08-29 14:49:47.289423 | orchestrator | } 2025-08-29 14:49:47.289441 | orchestrator | 2025-08-29 14:49:47.289459 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-08-29 14:49:47.289476 | orchestrator | Friday 29 August 2025 14:49:41 +0000 (0:00:00.170) 0:01:08.766 ********* 2025-08-29 14:49:47.289492 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:49:47.289509 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-08-29 14:49:47.289526 | orchestrator | } 2025-08-29 14:49:47.289542 | orchestrator | 2025-08-29 14:49:47.289589 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-08-29 14:49:47.289609 | orchestrator | Friday 29 August 2025 14:49:41 +0000 (0:00:00.157) 0:01:08.923 ********* 2025-08-29 14:49:47.289626 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:49:47.289643 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-08-29 14:49:47.289660 | orchestrator | } 2025-08-29 14:49:47.289677 | orchestrator | 2025-08-29 14:49:47.289693 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-08-29 14:49:47.289709 | 
orchestrator | Friday 29 August 2025 14:49:42 +0000 (0:00:00.167) 0:01:09.091 ********* 2025-08-29 14:49:47.289725 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:49:47.289743 | orchestrator | 2025-08-29 14:49:47.289762 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-08-29 14:49:47.289780 | orchestrator | Friday 29 August 2025 14:49:42 +0000 (0:00:00.579) 0:01:09.671 ********* 2025-08-29 14:49:47.289796 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:49:47.289813 | orchestrator | 2025-08-29 14:49:47.289832 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-08-29 14:49:47.289852 | orchestrator | Friday 29 August 2025 14:49:43 +0000 (0:00:00.532) 0:01:10.203 ********* 2025-08-29 14:49:47.289864 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:49:47.289874 | orchestrator | 2025-08-29 14:49:47.289885 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-08-29 14:49:47.289896 | orchestrator | Friday 29 August 2025 14:49:43 +0000 (0:00:00.765) 0:01:10.969 ********* 2025-08-29 14:49:47.289906 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:49:47.289917 | orchestrator | 2025-08-29 14:49:47.289927 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-08-29 14:49:47.289938 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:00.147) 0:01:11.116 ********* 2025-08-29 14:49:47.289948 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.289959 | orchestrator | 2025-08-29 14:49:47.289969 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-08-29 14:49:47.289980 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:00.127) 0:01:11.244 ********* 2025-08-29 14:49:47.290003 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290083 | orchestrator | 2025-08-29 14:49:47.290099 | 
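[Editor's note] The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks above presumably query LVM with something like `vgs --units b --reportformat json -o vg_name,vg_size,vg_free`; on this node no DB/WAL devices are configured, hence the empty `vgs_report` that follows. A sketch of parsing such a report into per-VG byte counts, using invented sample values (the `B` suffix handling assumes `--units b` output):

```python
import json

# Example `vgs --units b --reportformat json -o vg_name,vg_size,vg_free` output
# (sample VG; the log's actual DB/WAL report is empty, i.e. "vg": []).
vgs_cmd_output = json.dumps({"report": [{"vg": [
    {"vg_name": "ceph-db-0", "vg_size": "10737418240B", "vg_free": "10737418240B"}]}]})

def vg_sizes(report_json):
    """Return {vg_name: (total_bytes, free_bytes)} from a vgs JSON report."""
    sizes = {}
    for vg in json.loads(report_json)["report"][0]["vg"]:
        sizes[vg["vg_name"]] = (int(vg["vg_size"].rstrip("B")),
                                int(vg["vg_free"].rstrip("B")))
    return sizes

print(vg_sizes(vgs_cmd_output))
```

The subsequent "Fail if size of ... LVs > available" checks then compare the summed LV size requests against the `vg_free` values gathered here.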
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-08-29 14:49:47.290110 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:00.130) 0:01:11.375 ********* 2025-08-29 14:49:47.290121 | orchestrator | ok: [testbed-node-5] => { 2025-08-29 14:49:47.290132 | orchestrator |  "vgs_report": { 2025-08-29 14:49:47.290144 | orchestrator |  "vg": [] 2025-08-29 14:49:47.290173 | orchestrator |  } 2025-08-29 14:49:47.290186 | orchestrator | } 2025-08-29 14:49:47.290197 | orchestrator | 2025-08-29 14:49:47.290208 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-08-29 14:49:47.290218 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:00.138) 0:01:11.514 ********* 2025-08-29 14:49:47.290229 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290240 | orchestrator | 2025-08-29 14:49:47.290250 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-08-29 14:49:47.290261 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:00.148) 0:01:11.662 ********* 2025-08-29 14:49:47.290271 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290282 | orchestrator | 2025-08-29 14:49:47.290292 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-08-29 14:49:47.290303 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:00.148) 0:01:11.811 ********* 2025-08-29 14:49:47.290342 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290353 | orchestrator | 2025-08-29 14:49:47.290364 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-08-29 14:49:47.290375 | orchestrator | Friday 29 August 2025 14:49:44 +0000 (0:00:00.160) 0:01:11.972 ********* 2025-08-29 14:49:47.290385 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290396 | orchestrator | 2025-08-29 14:49:47.290406 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-08-29 14:49:47.290435 | orchestrator | Friday 29 August 2025 14:49:45 +0000 (0:00:00.132) 0:01:12.104 ********* 2025-08-29 14:49:47.290446 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290457 | orchestrator | 2025-08-29 14:49:47.290468 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-08-29 14:49:47.290478 | orchestrator | Friday 29 August 2025 14:49:45 +0000 (0:00:00.155) 0:01:12.260 ********* 2025-08-29 14:49:47.290489 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290499 | orchestrator | 2025-08-29 14:49:47.290510 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-08-29 14:49:47.290520 | orchestrator | Friday 29 August 2025 14:49:45 +0000 (0:00:00.142) 0:01:12.403 ********* 2025-08-29 14:49:47.290531 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290541 | orchestrator | 2025-08-29 14:49:47.290552 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-08-29 14:49:47.290562 | orchestrator | Friday 29 August 2025 14:49:45 +0000 (0:00:00.147) 0:01:12.550 ********* 2025-08-29 14:49:47.290573 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290583 | orchestrator | 2025-08-29 14:49:47.290594 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-08-29 14:49:47.290604 | orchestrator | Friday 29 August 2025 14:49:45 +0000 (0:00:00.146) 0:01:12.697 ********* 2025-08-29 14:49:47.290615 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290625 | orchestrator | 2025-08-29 14:49:47.290636 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-08-29 14:49:47.290652 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.390) 0:01:13.087 ********* 
2025-08-29 14:49:47.290663 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290673 | orchestrator | 2025-08-29 14:49:47.290684 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-08-29 14:49:47.290694 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.142) 0:01:13.230 ********* 2025-08-29 14:49:47.290705 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290725 | orchestrator | 2025-08-29 14:49:47.290736 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-08-29 14:49:47.290754 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.156) 0:01:13.387 ********* 2025-08-29 14:49:47.290765 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290776 | orchestrator | 2025-08-29 14:49:47.290787 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-08-29 14:49:47.290798 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.156) 0:01:13.544 ********* 2025-08-29 14:49:47.290808 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290819 | orchestrator | 2025-08-29 14:49:47.290829 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-08-29 14:49:47.290840 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.138) 0:01:13.683 ********* 2025-08-29 14:49:47.290850 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290862 | orchestrator | 2025-08-29 14:49:47.290880 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-08-29 14:49:47.290899 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.129) 0:01:13.813 ********* 2025-08-29 14:49:47.290915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 
14:49:47.290926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:47.290946 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.290965 | orchestrator | 2025-08-29 14:49:47.290983 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-08-29 14:49:47.290994 | orchestrator | Friday 29 August 2025 14:49:46 +0000 (0:00:00.170) 0:01:13.983 ********* 2025-08-29 14:49:47.291005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:47.291015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:47.291026 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:47.291037 | orchestrator | 2025-08-29 14:49:47.291047 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-08-29 14:49:47.291058 | orchestrator | Friday 29 August 2025 14:49:47 +0000 (0:00:00.150) 0:01:14.134 ********* 2025-08-29 14:49:47.291077 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:50.249053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:50.249181 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:50.249198 | orchestrator | 2025-08-29 14:49:50.249227 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-08-29 14:49:50.249241 | orchestrator | Friday 29 August 2025 
14:49:47 +0000 (0:00:00.143) 0:01:14.278 ********* 2025-08-29 14:49:50.249253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:50.249264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:50.249275 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:50.249286 | orchestrator | 2025-08-29 14:49:50.249297 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-08-29 14:49:50.249367 | orchestrator | Friday 29 August 2025 14:49:47 +0000 (0:00:00.151) 0:01:14.429 ********* 2025-08-29 14:49:50.249387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:50.249433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:50.249446 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:50.249457 | orchestrator | 2025-08-29 14:49:50.249468 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-08-29 14:49:50.249478 | orchestrator | Friday 29 August 2025 14:49:47 +0000 (0:00:00.154) 0:01:14.584 ********* 2025-08-29 14:49:50.249489 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:50.249503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:50.249521 | orchestrator | 
skipping: [testbed-node-5] 2025-08-29 14:49:50.249538 | orchestrator | 2025-08-29 14:49:50.249574 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-08-29 14:49:50.249594 | orchestrator | Friday 29 August 2025 14:49:47 +0000 (0:00:00.126) 0:01:14.711 ********* 2025-08-29 14:49:50.249614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:50.249635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:50.249676 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:50.249699 | orchestrator | 2025-08-29 14:49:50.249719 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-08-29 14:49:50.249740 | orchestrator | Friday 29 August 2025 14:49:48 +0000 (0:00:00.291) 0:01:15.002 ********* 2025-08-29 14:49:50.249764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})  2025-08-29 14:49:50.249785 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})  2025-08-29 14:49:50.249803 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:49:50.249823 | orchestrator | 2025-08-29 14:49:50.249841 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-08-29 14:49:50.249860 | orchestrator | Friday 29 August 2025 14:49:48 +0000 (0:00:00.152) 0:01:15.155 ********* 2025-08-29 14:49:50.249876 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:49:50.249897 | orchestrator | 2025-08-29 14:49:50.249916 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ********************************
Friday 29 August 2025 14:49:48 +0000 (0:00:00.521) 0:01:15.677 *********
ok: [testbed-node-5]

TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
Friday 29 August 2025 14:49:49 +0000 (0:00:00.600) 0:01:16.278 *********
ok: [testbed-node-5]

TASK [Create list of VG/LV names] **********************************************
Friday 29 August 2025 14:49:49 +0000 (0:00:00.137) 0:01:16.415 *********
ok: [testbed-node-5] => (item={'lv_name': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'vg_name': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})
ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'vg_name': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})

TASK [Fail if block LV defined in lvm_volumes is missing] **********************
Friday 29 August 2025 14:49:49 +0000 (0:00:00.163) 0:01:16.579 *********
skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})
skipping: [testbed-node-5]

TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
Friday 29 August 2025 14:49:49 +0000 (0:00:00.174) 0:01:16.753 *********
skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})
skipping: [testbed-node-5]

TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
Friday 29 August 2025 14:49:49 +0000 (0:00:00.157) 0:01:16.911 *********
skipping: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'})
skipping: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'})
skipping: [testbed-node-5]

TASK [Print LVM report data] ***************************************************
Friday 29 August 2025 14:49:50 +0000 (0:00:00.164) 0:01:17.075 *********
ok: [testbed-node-5] => {
    "lvm_report": {
        "lv": [
            {
                "lv_name": "osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df",
                "vg_name": "ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df"
            },
            {
                "lv_name": "osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1",
                "vg_name": "ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1"
            }
        ],
        "pv": [
            {
                "pv_name": "/dev/sdb",
                "vg_name": "ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1"
            },
            {
                "pv_name": "/dev/sdc",
                "vg_name": "ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df"
            }
        ]
    }
}

PLAY RECAP *********************************************************************
testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0


TASKS RECAP ********************************************************************
Friday 29 August 2025 14:49:50 +0000 (0:00:00.133) 0:01:17.209 *********
===============================================================================
Create block VGs -------------------------------------------------------- 5.92s
Create block LVs -------------------------------------------------------- 5.22s
Gather DB VGs with total and available size in bytes -------------------- 2.00s
Gather DB+WAL VGs with total and available size in bytes ---------------- 1.86s
Gather WAL VGs with total and available size in bytes ------------------- 1.74s
Get list of Ceph PVs with associated VGs -------------------------------- 1.67s
Add known partitions to the list of available block devices ------------- 1.57s
Get list of Ceph LVs with associated VGs -------------------------------- 1.57s
Add known links to the list of available block devices ------------------ 1.29s
Add known partitions to the list of available block devices ------------- 1.23s
Print LVM report data --------------------------------------------------- 1.03s
Add known links to the list of available block devices ------------------ 0.91s
Add known partitions to the list of available block devices ------------- 0.87s
Fail if block LV defined in lvm_volumes is missing ---------------------- 0.82s
Get extra vars for Ceph configuration ----------------------------------- 0.81s
Get initial list of available block devices ----------------------------- 0.75s
Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.71s
Print size needed for LVs on ceph_db_devices ---------------------------- 0.71s
Add known partitions to the list of available block devices ------------- 0.70s
Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.70s
2025-08-29 14:50:02 | INFO  | Task cabb56ca-cb67-4992-b54a-02286501d3ff (facts) was prepared for execution.
2025-08-29 14:50:02 | INFO  | It takes a moment until task cabb56ca-cb67-4992-b54a-02286501d3ff (facts) has been started and output is visible here.

PLAY [Apply role facts] ********************************************************

TASK [osism.commons.facts : Create custom facts directory] *********************
Friday 29 August 2025 14:50:06 +0000 (0:00:00.252) 0:00:00.252 *********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.facts : Copy fact files] ***********************************
Friday 29 August 2025 14:50:07 +0000 (0:00:01.223) 0:00:01.476 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY [Gather facts for all hosts] **********************************************

TASK [Gathers facts about hosts] ***********************************************
Friday 29 August 2025 14:50:08 +0000 (0:00:01.178) 0:00:02.654 *********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]

PLAY [Gather facts for all hosts if using --limit] *****************************

TASK [Gather facts for all hosts] **********************************************
Friday 29 August 2025 14:50:14 +0000 (0:00:05.631) 0:00:08.285 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0


TASKS RECAP ********************************************************************
Friday 29 August 2025 14:50:15 +0000 (0:00:00.464) 0:00:08.750 *********
===============================================================================
Gathers facts about hosts ----------------------------------------------- 5.63s
osism.commons.facts : Create custom facts directory --------------------- 1.22s
osism.commons.facts : Copy fact files ----------------------------------- 1.18s
Gather facts for all hosts ---------------------------------------------- 0.47s
2025-08-29 14:50:27 | INFO  | Task ad204f7d-b1f0-45cd-83f0-0960a1928212 (frr) was prepared for execution.
2025-08-29 14:50:27 | INFO  | It takes a moment until task ad204f7d-b1f0-45cd-83f0-0960a1928212 (frr) has been started and output is visible here.

PLAY [Apply role frr] **********************************************************

TASK [osism.services.frr : Include distribution specific install tasks] ********
Friday 29 August 2025 14:50:32 +0000 (0:00:00.280) 0:00:00.280 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager

TASK [osism.services.frr : Pin frr package version] ****************************
Friday 29 August 2025 14:50:32 +0000 (0:00:00.260) 0:00:00.541 *********
changed: [testbed-manager]

TASK [osism.services.frr : Install frr package] ********************************
Friday 29 August 2025 14:50:33 +0000 (0:00:01.096) 0:00:01.638 *********
changed: [testbed-manager]

TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
Friday 29 August 2025 14:50:42 +0000 (0:00:09.043) 0:00:10.681 *********
ok: [testbed-manager]

TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
Friday 29 August 2025 14:50:43 +0000 (0:00:01.123) 0:00:11.805 *********
changed: [testbed-manager]

TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
Friday 29 August 2025 14:50:44 +0000 (0:00:00.829) 0:00:12.634 *********
ok: [testbed-manager]

TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
Friday 29 August 2025 14:50:45 +0000 (0:00:01.096) 0:00:13.731 *********
ok: [testbed-manager -> localhost]

TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
Friday 29 August 2025 14:50:46 +0000 (0:00:00.743) 0:00:14.475 *********
skipping: [testbed-manager]

TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
Friday 29 August 2025 14:50:46 +0000 (0:00:00.194) 0:00:14.669 *********
changed: [testbed-manager]

TASK [osism.services.frr : Set sysctl parameters] ******************************
Friday 29 August 2025 14:50:47 +0000 (0:00:01.039) 0:00:15.708 *********
changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})

TASK [osism.services.frr : Manage frr service] *********************************
Friday 29 August 2025 14:50:50 +0000 (0:00:03.261) 0:00:18.969 *********
ok: [testbed-manager]

RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
Friday 29 August 2025 14:50:52 +0000 (0:00:01.430) 0:00:20.400 *********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0


TASKS RECAP ********************************************************************
Friday 29 August 2025 14:50:53 +0000 (0:00:01.386) 0:00:21.786 *********
===============================================================================
osism.services.frr : Install frr package -------------------------------- 9.04s
osism.services.frr : Set sysctl parameters ------------------------------ 3.26s
osism.services.frr : Manage frr service --------------------------------- 1.43s
osism.services.frr : Restart frr service -------------------------------- 1.39s
osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.12s
osism.services.frr : Pin frr package version ---------------------------- 1.10s
osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.10s
osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 1.04s
osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.83s
osism.services.frr : Check for frr.conf file in the configuration repository --- 0.74s
osism.services.frr : Include distribution specific install tasks -------- 0.26s
osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.19s

--> DEPLOY IN A NUTSHELL -- START -- Fri Aug 29 14:50:54 UTC 2025

2025-08-29 14:50:55 | INFO  | Collection nutshell is prepared for execution
2025-08-29 14:50:55 | INFO  | D [0] - dotfiles
2025-08-29 14:51:05 | INFO  | D [0] - homer
2025-08-29 14:51:05 | INFO  | D [0] - netdata
2025-08-29 14:51:05 | INFO  | D [0] - openstackclient
2025-08-29 14:51:05 | INFO  | D [0] - phpmyadmin
2025-08-29 14:51:05 | INFO  | A [0] - common
2025-08-29 14:51:05 | INFO  | A [1] -- loadbalancer
2025-08-29 14:51:05 | INFO  | D [2] --- opensearch
2025-08-29 14:51:05 | INFO  | A [2] --- mariadb-ng
2025-08-29 14:51:05 | INFO  | D [3] ---- horizon
2025-08-29 14:51:05 | INFO  | A [3] ---- keystone
2025-08-29 14:51:05 | INFO  | A [4] ----- neutron
2025-08-29 14:51:05 | INFO  | D [5] ------ wait-for-nova
2025-08-29 14:51:05 | INFO  | A [5] ------ octavia
2025-08-29 14:51:05 | INFO  | D [4] ----- barbican
2025-08-29 14:51:05 | INFO  | D [4] ----- designate
2025-08-29 14:51:05 | INFO  | D [4] ----- ironic
2025-08-29 14:51:05 | INFO  | D [4] ----- placement
2025-08-29 14:51:05 | INFO  | D [4] ----- magnum
2025-08-29 14:51:05 | INFO  | A [1] -- openvswitch
2025-08-29 14:51:05 | INFO  | D [2] --- ovn
2025-08-29 14:51:05 | INFO  | D [1] -- 
memcached 2025-08-29 14:51:05.962582 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [1] -- redis 2025-08-29 14:51:05.962606 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [1] -- rabbitmq-ng 2025-08-29 14:51:05.963127 | orchestrator | 2025-08-29 14:51:05 | INFO  | A [0] - kubernetes 2025-08-29 14:51:05.965674 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [1] -- kubeconfig 2025-08-29 14:51:05.965835 | orchestrator | 2025-08-29 14:51:05 | INFO  | A [1] -- copy-kubeconfig 2025-08-29 14:51:05.965872 | orchestrator | 2025-08-29 14:51:05 | INFO  | A [0] - ceph 2025-08-29 14:51:05.968658 | orchestrator | 2025-08-29 14:51:05 | INFO  | A [1] -- ceph-pools 2025-08-29 14:51:05.968723 | orchestrator | 2025-08-29 14:51:05 | INFO  | A [2] --- copy-ceph-keys 2025-08-29 14:51:05.968738 | orchestrator | 2025-08-29 14:51:05 | INFO  | A [3] ---- cephclient 2025-08-29 14:51:05.968750 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-08-29 14:51:05.969058 | orchestrator | 2025-08-29 14:51:05 | INFO  | A [4] ----- wait-for-keystone 2025-08-29 14:51:05.969203 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [5] ------ kolla-ceph-rgw 2025-08-29 14:51:05.969232 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [5] ------ glance 2025-08-29 14:51:05.969244 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [5] ------ cinder 2025-08-29 14:51:05.969260 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [5] ------ nova 2025-08-29 14:51:05.969593 | orchestrator | 2025-08-29 14:51:05 | INFO  | A [4] ----- prometheus 2025-08-29 14:51:05.969740 | orchestrator | 2025-08-29 14:51:05 | INFO  | D [5] ------ grafana 2025-08-29 14:51:06.185707 | orchestrator | 2025-08-29 14:51:06 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-08-29 14:51:06.187699 | orchestrator | 2025-08-29 14:51:06 | INFO  | Tasks are running in the background 2025-08-29 14:51:09.258507 | orchestrator | 2025-08-29 14:51:09 | INFO  | No task IDs specified, wait for 
all currently running tasks
2025-08-29 14:51:11.394928 | orchestrator | 2025-08-29 14:51:11 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:11.395606 | orchestrator | 2025-08-29 14:51:11 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:11.396470 | orchestrator | 2025-08-29 14:51:11 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:11.397394 | orchestrator | 2025-08-29 14:51:11 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:11.398676 | orchestrator | 2025-08-29 14:51:11 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:11.400823 | orchestrator | 2025-08-29 14:51:11 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:11.401676 | orchestrator | 2025-08-29 14:51:11 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state STARTED
2025-08-29 14:51:11.406123 | orchestrator | 2025-08-29 14:51:11 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:11.406180 | orchestrator | 2025-08-29 14:51:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:14.483827 | orchestrator | 2025-08-29 14:51:14 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:14.484870 | orchestrator | 2025-08-29 14:51:14 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:14.486666 | orchestrator | 2025-08-29 14:51:14 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:14.489844 | orchestrator | 2025-08-29 14:51:14 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:14.490813 | orchestrator | 2025-08-29 14:51:14 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:14.492940 | orchestrator | 2025-08-29 14:51:14 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:14.493969 | orchestrator | 2025-08-29 14:51:14 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state STARTED
2025-08-29 14:51:14.494611 | orchestrator | 2025-08-29 14:51:14 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:14.494631 | orchestrator | 2025-08-29 14:51:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:17.574858 | orchestrator | 2025-08-29 14:51:17 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:17.574921 | orchestrator | 2025-08-29 14:51:17 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:17.575366 | orchestrator | 2025-08-29 14:51:17 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:17.575837 | orchestrator | 2025-08-29 14:51:17 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:17.578636 | orchestrator | 2025-08-29 14:51:17 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:17.579223 | orchestrator | 2025-08-29 14:51:17 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:17.582404 | orchestrator | 2025-08-29 14:51:17 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state STARTED
2025-08-29 14:51:17.582649 | orchestrator | 2025-08-29 14:51:17 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:17.582665 | orchestrator | 2025-08-29 14:51:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:20.624111 | orchestrator | 2025-08-29 14:51:20 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:20.624221 | orchestrator | 2025-08-29 14:51:20 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:20.624240 | orchestrator | 2025-08-29 14:51:20 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:20.625005 | orchestrator | 2025-08-29 14:51:20 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:20.626098 | orchestrator | 2025-08-29 14:51:20 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:20.627647 | orchestrator | 2025-08-29 14:51:20 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:20.628682 | orchestrator | 2025-08-29 14:51:20 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state STARTED
2025-08-29 14:51:20.629358 | orchestrator | 2025-08-29 14:51:20 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:20.629381 | orchestrator | 2025-08-29 14:51:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:24.023221 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:24.023459 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:24.023477 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:24.023492 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:24.023505 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:24.023545 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:24.023559 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state STARTED
2025-08-29 14:51:24.023571 | orchestrator | 2025-08-29 14:51:24 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:24.023584 | orchestrator | 2025-08-29 14:51:24 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:27.042826 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:27.042887 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:27.042895 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:27.042902 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:27.042909 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:27.043838 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:27.044594 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state STARTED
2025-08-29 14:51:27.045216 | orchestrator | 2025-08-29 14:51:27 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:27.045237 | orchestrator | 2025-08-29 14:51:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:30.106616 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:30.108575 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:30.116223 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:30.118431 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:30.122368 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:30.125267 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:30.126244 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state STARTED
2025-08-29 14:51:30.129652 | orchestrator | 2025-08-29 14:51:30 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:30.129706 | orchestrator | 2025-08-29 14:51:30 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:33.522235 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:33.522359 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:33.522370 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:33.522377 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:33.522384 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:33.522390 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:33.522396 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state STARTED
2025-08-29 14:51:33.522403 | orchestrator | 2025-08-29 14:51:33 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:33.522409 | orchestrator | 2025-08-29 14:51:33 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:36.554595 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:36.565931 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:36.571708 | orchestrator | 2025-08-29
14:51:36 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:36.577938 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:36.584594 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:36.603130 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:36.605839 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task 5d259f29-0397-40ea-9449-80860f1086f2 is in state SUCCESS
2025-08-29 14:51:36.607805 | orchestrator |
2025-08-29 14:51:36.607863 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-08-29 14:51:36.607873 | orchestrator |
2025-08-29 14:51:36.607880 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-08-29 14:51:36.607886 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:01.024) 0:00:01.024 *********
2025-08-29 14:51:36.607892 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:51:36.607900 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:51:36.607906 | orchestrator | changed: [testbed-manager]
2025-08-29 14:51:36.607913 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:51:36.607919 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:51:36.607925 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:51:36.607930 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:51:36.607936 | orchestrator |
2025-08-29 14:51:36.607942 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-08-29 14:51:36.607948 | orchestrator | Friday 29 August 2025 14:51:24 +0000 (0:00:03.767) 0:00:04.792 *********
2025-08-29 14:51:36.607956 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 14:51:36.607963 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 14:51:36.607969 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-08-29 14:51:36.607975 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 14:51:36.607981 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 14:51:36.607987 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 14:51:36.607993 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 14:51:36.607999 | orchestrator |
2025-08-29 14:51:36.608006 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-08-29 14:51:36.608013 | orchestrator | Friday 29 August 2025 14:51:26 +0000 (0:00:02.319) 0:00:07.112 *********
2025-08-29 14:51:36.608023 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:51:25.267134', 'end': '2025-08-29 14:51:25.276606', 'delta': '0:00:00.009472', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:51:36.608032 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:51:25.269113', 'end': '2025-08-29 14:51:25.277138', 'delta': '0:00:00.008025', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:51:36.608039 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:51:25.355561', 'end': '2025-08-29 14:51:25.360967', 'delta': '0:00:00.005406', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:51:36.608235 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:51:25.514164', 'end': '2025-08-29 14:51:25.521631', 'delta': '0:00:00.007467', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:51:36.608249 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:51:25.902830', 'end': '2025-08-29 14:51:25.913712', 'delta': '0:00:00.010882', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:51:36.608256 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:51:26.169881', 'end': '2025-08-29 14:51:26.177079', 'delta': '0:00:00.007198', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:51:36.608263 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-08-29 14:51:26.374701', 'end': '2025-08-29 14:51:26.381268', 'delta': '0:00:00.006567', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-08-29 14:51:36.608269 | orchestrator |
2025-08-29 14:51:36.608276 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-08-29 14:51:36.608300 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:02.429) 0:00:09.542 *********
2025-08-29 14:51:36.608306 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-08-29 14:51:36.608318 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 14:51:36.608325 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 14:51:36.608331 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 14:51:36.608337 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 14:51:36.608344 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 14:51:36.608350 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 14:51:36.608357 | orchestrator |
2025-08-29 14:51:36.608363 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-08-29 14:51:36.608369 | orchestrator | Friday 29 August 2025 14:51:31 +0000 (0:00:02.911) 0:00:12.454 *********
2025-08-29 14:51:36.608376 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-08-29 14:51:36.608382 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-08-29 14:51:36.608391 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-08-29 14:51:36.608398 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-08-29 14:51:36.608404 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-08-29 14:51:36.608411 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-08-29 14:51:36.608417 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-08-29 14:51:36.608424 | orchestrator |
2025-08-29 14:51:36.608430 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:51:36.608445 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:36.608453 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:36.608460 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:36.608467 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:36.608474 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:36.608481 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:36.608487 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:51:36.608494 | orchestrator |
2025-08-29 14:51:36.608501 | orchestrator |
2025-08-29 14:51:36.608507 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:51:36.608514 | orchestrator | Friday 29 August 2025 14:51:34 +0000 (0:00:02.462) 0:00:14.916 *********
2025-08-29 14:51:36.608521 | orchestrator | ===============================================================================
2025-08-29 14:51:36.608528 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.77s
2025-08-29 14:51:36.608535 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.91s
2025-08-29 14:51:36.608541 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.46s
2025-08-29 14:51:36.608549 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.43s
2025-08-29 14:51:36.608556 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.32s
2025-08-29 14:51:36.613469 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED
2025-08-29 14:51:36.617519 | orchestrator | 2025-08-29 14:51:36 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:36.618344 | orchestrator | 2025-08-29 14:51:36 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:39.753067 | orchestrator | 2025-08-29 14:51:39 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:39.755524 | orchestrator | 2025-08-29 14:51:39 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:39.766689 | orchestrator | 2025-08-29 14:51:39 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:39.806942 | orchestrator | 2025-08-29 14:51:39 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:39.828652 | orchestrator | 2025-08-29 14:51:39 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:39.837856 | orchestrator | 2025-08-29 14:51:39 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:39.840640 | orchestrator | 2025-08-29 14:51:39 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED
2025-08-29 14:51:39.841548 | orchestrator | 2025-08-29 14:51:39 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:39.842502 | orchestrator | 2025-08-29 14:51:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:42.909571 | orchestrator | 2025-08-29 14:51:42 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:42.912081 | orchestrator | 2025-08-29 14:51:42 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:42.912138 | orchestrator | 2025-08-29 14:51:42 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:42.912508 | orchestrator | 2025-08-29 14:51:42 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:42.925676 | orchestrator | 2025-08-29 14:51:42 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:42.925778 | orchestrator | 2025-08-29 14:51:42 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:42.925790 | orchestrator | 2025-08-29 14:51:42 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED
2025-08-29 14:51:42.928308 | orchestrator | 2025-08-29 14:51:42 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:42.928402 | orchestrator | 2025-08-29 14:51:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:51:46.010902 | orchestrator | 2025-08-29 14:51:45 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:51:46.011036 | orchestrator | 2025-08-29 14:51:45 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:51:46.011052 | orchestrator | 2025-08-29 14:51:45 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED
2025-08-29 14:51:46.011063 | orchestrator | 2025-08-29 14:51:45 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED
2025-08-29 14:51:46.011075 | orchestrator | 2025-08-29 14:51:45 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:51:46.011086 | orchestrator | 2025-08-29 14:51:45 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED
2025-08-29 14:51:46.011097 | orchestrator | 2025-08-29 14:51:46 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED
2025-08-29 14:51:46.011108 | orchestrator | 2025-08-29 14:51:46 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:51:46.011120 | orchestrator | 2025-08-29 14:51:46 | INFO  | Wait 1 second(s) until the next check
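The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from a plain polling loop over the background task IDs. A minimal sketch of that pattern in Python (hypothetical names, not the actual osism client code; the terminal-state set is an assumption):

```python
import time
from typing import Callable, Iterable

# Assumed terminal states; a real task manager may define more (e.g. FAILURE variants).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   check_interval: float = 1.0,
                   log=print, sleep=time.sleep) -> None:
    """Poll until every task reaches a terminal state.

    Each round logs the current state of every still-pending task,
    then waits `check_interval` seconds before the next check,
    matching the cadence of the log output above.
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state not in TERMINAL_STATES:
                still_running.append(task_id)
        pending = still_running
        if pending:
            log(f"Wait {int(check_interval)} second(s) until the next check")
            sleep(check_interval)
```

`log` and `sleep` are injectable so the loop can be exercised in tests without real delays.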
2025-08-29 14:51:49.049534 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:51:49.050357 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:51:49.050981 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED 2025-08-29 14:51:49.051834 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:51:49.052651 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:51:49.053061 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED 2025-08-29 14:51:49.054207 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:51:49.054263 | orchestrator | 2025-08-29 14:51:49 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:51:49.054311 | orchestrator | 2025-08-29 14:51:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:52.087248 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:51:52.091727 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:51:52.094497 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED 2025-08-29 14:51:52.096454 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:51:52.099597 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:51:52.101988 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is 
in state STARTED 2025-08-29 14:51:52.102920 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:51:52.103803 | orchestrator | 2025-08-29 14:51:52 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:51:52.104169 | orchestrator | 2025-08-29 14:51:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:55.271367 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:51:55.271461 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:51:55.271471 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state STARTED 2025-08-29 14:51:55.271478 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:51:55.271501 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:51:55.271509 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED 2025-08-29 14:51:55.271515 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:51:55.271521 | orchestrator | 2025-08-29 14:51:55 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:51:55.271528 | orchestrator | 2025-08-29 14:51:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:51:58.219615 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:51:58.221913 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:51:58.222051 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in 
state STARTED 2025-08-29 14:51:58.222066 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:51:58.222073 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:51:58.227010 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED 2025-08-29 14:51:58.234464 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:51:58.234538 | orchestrator | 2025-08-29 14:51:58 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:51:58.234545 | orchestrator | 2025-08-29 14:51:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:01.369334 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:01.372062 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:01.372584 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task ba1f7032-c16f-400c-9fa6-41ed465ac715 is in state SUCCESS 2025-08-29 14:52:01.374511 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:52:01.376521 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:52:01.378006 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED 2025-08-29 14:52:01.384905 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:52:01.391701 | orchestrator | 2025-08-29 14:52:01 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:52:01.391780 | orchestrator | 2025-08-29 14:52:01 | INFO  | Wait 1 second(s) until the next 
check 2025-08-29 14:52:04.486512 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:04.486582 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:04.486589 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:52:04.516619 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:52:04.516689 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED 2025-08-29 14:52:04.516694 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:52:04.516698 | orchestrator | 2025-08-29 14:52:04 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:52:04.516703 | orchestrator | 2025-08-29 14:52:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:07.704842 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:07.704930 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:07.704941 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:52:07.704948 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:52:07.704993 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED 2025-08-29 14:52:07.705001 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:52:07.705007 | orchestrator | 2025-08-29 14:52:07 | INFO  | Task 
190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:52:07.705014 | orchestrator | 2025-08-29 14:52:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:10.584820 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:10.589932 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:10.593816 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:52:10.597753 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:52:10.601698 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED 2025-08-29 14:52:10.605408 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:52:10.606175 | orchestrator | 2025-08-29 14:52:10 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:52:10.606218 | orchestrator | 2025-08-29 14:52:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:13.703397 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:13.703516 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:13.705438 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:52:13.707183 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:52:13.707196 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state STARTED 2025-08-29 14:52:13.707201 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task 
37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:52:13.708770 | orchestrator | 2025-08-29 14:52:13 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:52:13.708789 | orchestrator | 2025-08-29 14:52:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:16.761185 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:16.761291 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:16.761298 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:52:16.761302 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:52:16.764236 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task 656b20ea-77dd-4500-9eb3-b9bc519a8c8c is in state SUCCESS 2025-08-29 14:52:16.764300 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:52:16.764820 | orchestrator | 2025-08-29 14:52:16 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:52:16.764829 | orchestrator | 2025-08-29 14:52:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:19.820787 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:19.825453 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:19.829154 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:52:19.832803 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:52:19.836406 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task 
37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:52:19.839244 | orchestrator | 2025-08-29 14:52:19 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:52:19.839336 | orchestrator | 2025-08-29 14:52:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:22.922195 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:22.928680 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:22.931318 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state STARTED 2025-08-29 14:52:22.934747 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED 2025-08-29 14:52:22.941883 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED 2025-08-29 14:52:22.946246 | orchestrator | 2025-08-29 14:52:22 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED 2025-08-29 14:52:22.946907 | orchestrator | 2025-08-29 14:52:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:52:26.043481 | orchestrator | 2025-08-29 14:52:26 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:52:26.047933 | orchestrator | 2025-08-29 14:52:26 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:52:26.049878 | orchestrator | 2025-08-29 14:52:26 | INFO  | Task ab2d48aa-f4d8-4328-b464-ce38c1b2a7cb is in state SUCCESS 2025-08-29 14:52:26.050119 | orchestrator | 2025-08-29 14:52:26.050136 | orchestrator | 2025-08-29 14:52:26.050140 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-08-29 14:52:26.050145 | orchestrator | 2025-08-29 14:52:26.050149 | orchestrator | TASK [osism.services.homer : Inform about new parameter 
homer_url_opensearch_dashboards] ***
2025-08-29 14:52:26.050154 | orchestrator | Friday 29 August 2025 14:51:19 +0000 (0:00:00.713) 0:00:00.713 *********
2025-08-29 14:52:26.050158 | orchestrator | ok: [testbed-manager] => {
2025-08-29 14:52:26.050163 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-08-29 14:52:26.050168 | orchestrator | }
2025-08-29 14:52:26.050172 | orchestrator |
2025-08-29 14:52:26.050175 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-08-29 14:52:26.050179 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:00.305) 0:00:01.019 *********
2025-08-29 14:52:26.050183 | orchestrator | ok: [testbed-manager]
2025-08-29 14:52:26.050187 | orchestrator |
2025-08-29 14:52:26.050191 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-08-29 14:52:26.050195 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:02.056) 0:00:03.076 *********
2025-08-29 14:52:26.050199 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-08-29 14:52:26.050202 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-08-29 14:52:26.050206 | orchestrator |
2025-08-29 14:52:26.050210 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-08-29 14:52:26.050232 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:01.084) 0:00:04.160 *********
2025-08-29 14:52:26.050239 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:26.050246 | orchestrator |
2025-08-29 14:52:26.050252 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-08-29 14:52:26.050258 | orchestrator | Friday 29 August 2025 14:51:25 +0000 (0:00:02.423) 0:00:06.584 *********
2025-08-29 14:52:26.050277 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:26.050283 | orchestrator |
2025-08-29 14:52:26.050290 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-08-29 14:52:26.050297 | orchestrator | Friday 29 August 2025 14:51:27 +0000 (0:00:01.739) 0:00:08.323 *********
2025-08-29 14:52:26.050304 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-08-29 14:52:26.050310 | orchestrator | ok: [testbed-manager]
2025-08-29 14:52:26.050317 | orchestrator |
2025-08-29 14:52:26.050322 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-08-29 14:52:26.050329 | orchestrator | Friday 29 August 2025 14:51:54 +0000 (0:00:26.835) 0:00:35.159 *********
2025-08-29 14:52:26.050336 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:26.050342 | orchestrator |
2025-08-29 14:52:26.050349 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:52:26.050376 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:52:26.050382 | orchestrator |
2025-08-29 14:52:26.050385 | orchestrator |
2025-08-29 14:52:26.050389 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:52:26.050393 | orchestrator | Friday 29 August 2025 14:51:58 +0000 (0:00:04.049) 0:00:39.208 *********
2025-08-29 14:52:26.050397 | orchestrator | ===============================================================================
2025-08-29 14:52:26.050400 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.84s
2025-08-29 14:52:26.050404 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.05s
2025-08-29 14:52:26.050408 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.42s
2025-08-29 14:52:26.050412 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.06s
2025-08-29 14:52:26.050415 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.74s
2025-08-29 14:52:26.050419 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.08s
2025-08-29 14:52:26.050423 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.31s
2025-08-29 14:52:26.050426 | orchestrator |
2025-08-29 14:52:26.050431 | orchestrator |
2025-08-29 14:52:26.050435 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-08-29 14:52:26.050438 | orchestrator |
2025-08-29 14:52:26.050444 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-08-29 14:52:26.050448 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:00.503) 0:00:00.503 *********
2025-08-29 14:52:26.050452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-08-29 14:52:26.050456 | orchestrator |
2025-08-29 14:52:26.050460 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-08-29 14:52:26.050464 | orchestrator | Friday 29 August 2025 14:51:21 +0000 (0:00:00.490) 0:00:00.993 *********
2025-08-29 14:52:26.050467 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-08-29 14:52:26.050471 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-08-29 14:52:26.050475 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-08-29 14:52:26.050478 | orchestrator |
2025-08-29 14:52:26.050482 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-08-29 14:52:26.050490 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:01.826) 0:00:02.820 *********
2025-08-29 14:52:26.050494 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:26.050498 | orchestrator |
2025-08-29 14:52:26.050501 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-08-29 14:52:26.050505 | orchestrator | Friday 29 August 2025 14:51:24 +0000 (0:00:01.866) 0:00:04.686 *********
2025-08-29 14:52:26.050516 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-08-29 14:52:26.050520 | orchestrator | ok: [testbed-manager]
2025-08-29 14:52:26.050524 | orchestrator |
2025-08-29 14:52:26.050529 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-08-29 14:52:26.050534 | orchestrator | Friday 29 August 2025 14:52:04 +0000 (0:00:39.913) 0:00:44.600 *********
2025-08-29 14:52:26.050541 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:26.050548 | orchestrator |
2025-08-29 14:52:26.050554 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-08-29 14:52:26.050560 | orchestrator | Friday 29 August 2025 14:52:06 +0000 (0:00:02.086) 0:00:46.687 *********
2025-08-29 14:52:26.050581 | orchestrator | ok: [testbed-manager]
2025-08-29 14:52:26.050588 | orchestrator |
2025-08-29 14:52:26.050594 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-08-29 14:52:26.050601 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:01.433) 0:00:48.120 *********
2025-08-29 14:52:26.050607 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:26.050613 | orchestrator |
2025-08-29 14:52:26.050619 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-08-29 14:52:26.050626 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:03.958) 0:00:52.078 *********
2025-08-29 14:52:26.050633 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:26.050640 | orchestrator |
2025-08-29 14:52:26.050647 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-08-29 14:52:26.050653 | orchestrator | Friday 29 August 2025 14:52:14 +0000 (0:00:01.744) 0:00:53.822 *********
2025-08-29 14:52:26.050659 | orchestrator | changed: [testbed-manager]
2025-08-29 14:52:26.050666 | orchestrator |
2025-08-29 14:52:26.050673 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-08-29 14:52:26.050679 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:01.518) 0:00:55.341 *********
2025-08-29 14:52:26.050685 | orchestrator | ok: [testbed-manager]
2025-08-29 14:52:26.050689 | orchestrator |
2025-08-29 14:52:26.050695 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:52:26.050702 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:52:26.050709 | orchestrator |
2025-08-29 14:52:26.050715 | orchestrator |
2025-08-29 14:52:26.050722 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:52:26.050727 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.396) 0:00:55.737 *********
2025-08-29 14:52:26.050731 | orchestrator | ===============================================================================
2025-08-29 14:52:26.050735 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 39.91s
2025-08-29 14:52:26.050739 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.96s
2025-08-29 14:52:26.050742 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.09s
2025-08-29 14:52:26.050747 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.87s
2025-08-29 14:52:26.050751 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.83s
2025-08-29 14:52:26.050756 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.74s
2025-08-29 14:52:26.050763 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.52s
2025-08-29 14:52:26.050769 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.43s
2025-08-29 14:52:26.050781 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.49s
2025-08-29 14:52:26.050789 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s
2025-08-29 14:52:26.050796 | orchestrator |
2025-08-29 14:52:26.050803 | orchestrator |
2025-08-29 14:52:26.050808 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 14:52:26.050812 | orchestrator |
2025-08-29 14:52:26.050817 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 14:52:26.050821 | orchestrator | Friday 29 August 2025 14:48:37 +0000 (0:00:00.265) 0:00:00.265 *********
2025-08-29 14:52:26.050825 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:52:26.050829 | orchestrator |
2025-08-29 14:52:26.050833 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 14:52:26.050839 | orchestrator | Friday 29 August 2025 14:48:37 +0000 (0:00:00.124) 0:00:00.390 *********
2025-08-29 14:52:26.050844 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-08-29 14:52:26.050850 | orchestrator |
2025-08-29 14:52:26.050857 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-08-29 14:52:26.050863 | orchestrator |
2025-08-29 14:52:26.050870 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 14:52:26.050876 | orchestrator | Friday 29 August 2025 14:48:38 +0000 (0:00:00.151) 0:00:00.541 *********
2025-08-29 14:52:26.050882 | orchestrator | included: /ansible/roles/opensearch/tasks/pull.yml for testbed-node-0
2025-08-29 14:52:26.050886 | orchestrator |
2025-08-29 14:52:26.050889 | orchestrator | TASK [service-images-pull : opensearch | Pull images] **************************
2025-08-29 14:52:26.050893 | orchestrator | Friday 29 August 2025 14:48:38 +0000 (0:00:00.192) 0:00:00.734 *********
2025-08-29 14:52:26.050897 | orchestrator |
2025-08-29 14:52:26.050902 | orchestrator | STILL ALIVE [task 'service-images-pull : opensearch | Pull images' is running] ***
2025-08-29 14:52:26.050909 | orchestrator | changed: [testbed-node-0] => (item=opensearch)
2025-08-29 14:52:26.050914 | orchestrator |
2025-08-29 14:52:26.050920 | orchestrator | STILL ALIVE [task 'service-images-pull : opensearch | Pull images' is running] ***
2025-08-29 14:52:26.050930 | orchestrator |
2025-08-29 14:52:26.050936 | orchestrator | STILL ALIVE [task 'service-images-pull : opensearch | Pull images' is running] ***
2025-08-29 14:52:26.050942 | orchestrator | changed: [testbed-node-0] => (item=opensearch-dashboards)
2025-08-29 14:52:26.050948 | orchestrator |
2025-08-29 14:52:26.050954 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:52:26.050967 | orchestrator | testbed-node-0 : ok=4  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:52:26.050973 | orchestrator |
2025-08-29 14:52:26.050979 | orchestrator |
2025-08-29 14:52:26.050985 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:52:26.050991 | orchestrator | Friday 29 August 2025 14:52:23 +0000 (0:03:45.091) 0:03:45.825 *********
2025-08-29 14:52:26.050997 | orchestrator |
===============================================================================
2025-08-29 14:52:26.051003 | orchestrator | service-images-pull : opensearch | Pull images ------------------------ 225.09s
2025-08-29 14:52:26.051009 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.19s
2025-08-29 14:52:26.051015 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.15s
2025-08-29 14:52:26.051021 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.12s
2025-08-29 14:52:26.054463 | orchestrator | 2025-08-29 14:52:26 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:52:26.069490 | orchestrator | 2025-08-29 14:52:26 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state STARTED
2025-08-29 14:52:26.073798 | orchestrator | 2025-08-29 14:52:26 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state STARTED
2025-08-29 14:52:26.074792 | orchestrator | 2025-08-29 14:52:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:52:44.514212 | orchestrator | 2025-08-29 14:52:44 | INFO  | Task 190a660a-f71b-4a60-a5d0-12d04324078b is in state SUCCESS
2025-08-29 14:52:44.514759 | orchestrator |
2025-08-29 14:52:44.514795 | orchestrator |
2025-08-29 14:52:44.514801 | orchestrator | PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on enabled services] ***********************************
Friday 29 August 2025 14:51:20 +0000 (0:00:00.602)       0:00:00.602 *********
changed: [testbed-manager] => (item=enable_netdata_True)
changed: [testbed-node-0] => (item=enable_netdata_True)
changed: [testbed-node-1] => (item=enable_netdata_True)
changed: [testbed-node-2] => (item=enable_netdata_True)
changed: [testbed-node-3] => (item=enable_netdata_True)
changed: [testbed-node-4] => (item=enable_netdata_True)
changed: [testbed-node-5] => (item=enable_netdata_True)

PLAY [Apply role netdata] ******************************************************

TASK [osism.services.netdata : Include distribution specific install tasks] ****
Friday 29 August 2025 14:51:23 +0000 (0:00:02.739)       0:00:03.342 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4

TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
Friday 29 August 2025 14:51:26 +0000 (0:00:03.009)       0:00:06.352 *********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-manager]

TASK [osism.services.netdata : Install apt-transport-https package] ************
Friday 29 August 2025 14:51:27 +0000 (0:00:01.779)       0:00:08.131 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [osism.services.netdata : Add repository gpg key] *************************
Friday 29 August 2025 14:51:32 +0000 (0:00:04.741)       0:00:12.873 *********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.netdata : Add repository] *********************************
Friday 29 August 2025 14:51:36 +0000 (0:00:03.361)       0:00:16.235 *********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.services.netdata : Install package netdata] ************************
Friday 29 August 2025 14:51:48 +0000 (0:00:12.668)       0:00:28.903 *********
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-manager]

TASK [osism.services.netdata : Include config tasks] ***************************
Friday 29 August 2025 14:52:17 +0000 (0:00:29.194)       0:00:58.098 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Copy configuration files] ***********************
Friday 29 August 2025 14:52:19 +0000 (0:00:01.523)       0:00:59.622 *********
changed: [testbed-node-0] => (item=netdata.conf)
changed: [testbed-node-1] => (item=netdata.conf)
changed: [testbed-manager] => (item=netdata.conf)
changed: [testbed-node-2] => (item=netdata.conf)
changed: [testbed-node-4] => (item=netdata.conf)
changed: [testbed-node-3] => (item=netdata.conf)
changed: [testbed-node-5] => (item=netdata.conf)
changed: [testbed-node-3] => (item=stream.conf)
changed: [testbed-node-2] => (item=stream.conf)
changed: [testbed-node-4] => (item=stream.conf)
changed: [testbed-node-1] => (item=stream.conf)
changed: [testbed-node-5] => (item=stream.conf)
changed: [testbed-node-0] => (item=stream.conf)
changed: [testbed-manager] => (item=stream.conf)

TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
Friday 29 August 2025 14:52:25 +0000 (0:00:06.116)       0:01:05.739 *********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Opt out from anonymous statistics] **************
Friday 29 August 2025 14:52:27 +0000 (0:00:02.246)       0:01:07.985 *********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.netdata : Add netdata user to docker group] ***************
Friday 29 August 2025 14:52:30 +0000 (0:00:02.464)       0:01:10.450 *********
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Manage service netdata] *************************
Friday 29 August 2025 14:52:32 +0000 (0:00:02.564)       0:01:13.015 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-manager]

TASK [osism.services.netdata : Include host type specific tasks] ***************
Friday 29 August 2025 14:52:35 +0000 (0:00:02.578)       0:01:15.593 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
Friday 29 August 2025 14:52:37 +0000 (0:00:02.029)       0:01:17.623 *********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
Friday 29 August 2025 14:52:39 +0000 (0:00:02.163)       0:01:19.786 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=16   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-1             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-2             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-3             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-4             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-5             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Friday 29 August 2025 14:52:42 +0000 (0:00:03.258)       0:01:23.045 *********
===============================================================================
osism.services.netdata : Install package netdata ----------------------- 29.19s
osism.services.netdata : Add repository -------------------------------- 12.67s
osism.services.netdata : Copy configuration files ----------------------- 6.12s
osism.services.netdata : Install apt-transport-https package ------------ 4.75s
osism.services.netdata : Add repository gpg key ------------------------- 3.36s
osism.services.netdata : Restart service netdata ------------------------ 3.26s
osism.services.netdata : Include distribution specific install tasks ---- 3.01s
Group hosts based on enabled services ----------------------------------- 2.73s
osism.services.netdata : Manage service netdata ------------------------- 2.58s
osism.services.netdata : Add netdata user to docker group --------------- 2.56s
osism.services.netdata : Opt out from anonymous statistics -------------- 2.47s
osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.25s
osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.16s
osism.services.netdata : Include host type specific tasks --------------- 2.03s
osism.services.netdata : Remove old architecture-dependent
repository --- 1.78s
osism.services.netdata : Include config tasks --------------------------- 1.52s
2025-08-29 14:52:44 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:52:47 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:52:47 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:52:47 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:52:47 | INFO  | Task 37781dc7-1eb6-40e0-8d7f-9a1b3e635880 is in state SUCCESS
2025-08-29 14:52:47 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:52:50 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:52:50 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:52:50 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state STARTED
2025-08-29 14:52:50 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:06 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:54:06.787507 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:54:06.797353 | orchestrator | 2025-08-29 14:54:06.797416 | orchestrator | 2025-08-29 14:54:06.797429 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-08-29 14:54:06.797440 | orchestrator | 2025-08-29 14:54:06.797451 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-08-29 14:54:06.797461 | orchestrator | Friday 29 August 2025 14:51:42 +0000 (0:00:00.286) 0:00:00.286 ********* 2025-08-29 14:54:06.797471 | orchestrator | ok: [testbed-manager] 2025-08-29 14:54:06.797482 | orchestrator | 2025-08-29 14:54:06.797492 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-08-29 14:54:06.797502 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:01.081) 0:00:01.367 ********* 2025-08-29 14:54:06.797513 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-08-29 14:54:06.797523 | orchestrator | 2025-08-29 14:54:06.797533 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-08-29 14:54:06.797542 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:00.719) 0:00:02.087 ********* 2025-08-29 14:54:06.797573 | orchestrator | changed: [testbed-manager] 2025-08-29 14:54:06.797583 | orchestrator | 2025-08-29 14:54:06.797593 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-08-29 14:54:06.797602 | orchestrator | Friday 29 August 2025 14:51:45 +0000 (0:00:01.157) 0:00:03.244 ********* 2025-08-29 14:54:06.797612 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-08-29 14:54:06.797622 | orchestrator | ok: [testbed-manager] 2025-08-29 14:54:06.797631 | orchestrator | 2025-08-29 14:54:06.797641 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-08-29 14:54:06.797650 | orchestrator | Friday 29 August 2025 14:52:34 +0000 (0:00:49.379) 0:00:52.624 ********* 2025-08-29 14:54:06.797659 | orchestrator | changed: [testbed-manager] 2025-08-29 14:54:06.797669 | orchestrator | 2025-08-29 14:54:06.797678 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:54:06.797688 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:06.797700 | orchestrator | 2025-08-29 14:54:06.797709 | orchestrator | 2025-08-29 14:54:06.797719 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:54:06.797729 | orchestrator | Friday 29 August 2025 14:52:44 +0000 (0:00:10.310) 0:01:02.934 ********* 2025-08-29 14:54:06.797738 | orchestrator | =============================================================================== 2025-08-29 14:54:06.797748 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 49.38s 2025-08-29 14:54:06.797757 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 10.31s 2025-08-29 14:54:06.797767 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.16s 2025-08-29 14:54:06.797776 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.08s 2025-08-29 14:54:06.797786 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.72s 2025-08-29 14:54:06.797795 | orchestrator | 2025-08-29 14:54:06.797804 | orchestrator | 2025-08-29 14:54:06.797814 | orchestrator | PLAY [Apply role common] 
******************************************************* 
2025-08-29 14:54:06.797824 | orchestrator |
2025-08-29 14:54:06.797833 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 14:54:06.797843 | orchestrator | Friday 29 August 2025 14:51:11 +0000 (0:00:00.368) 0:00:00.368 *********
2025-08-29 14:54:06.797853 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:54:06.797864 | orchestrator |
2025-08-29 14:54:06.797874 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-08-29 14:54:06.797883 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:01.744) 0:00:02.113 *********
2025-08-29 14:54:06.797893 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:54:06.797903 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:54:06.797912 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:54:06.797922 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:54:06.797931 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:54:06.797941 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:54:06.797952 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:54:06.797963 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:54:06.797975 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:54:06.797988 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:54:06.798005 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:54:06.798088 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:54:06.798103 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:54:06.798114 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:54:06.798126 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:54:06.798144 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-08-29 14:54:06.798171 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:54:06.798184 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:54:06.798219 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-08-29 14:54:06.798230 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:54:06.798241 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-08-29 14:54:06.798253 | orchestrator |
2025-08-29 14:54:06.798263 | orchestrator | TASK [common : include_tasks] **************************************************
2025-08-29 14:54:06.798275 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:04.017) 0:00:06.130 *********
2025-08-29 14:54:06.798286 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:54:06.798299 | orchestrator |
2025-08-29
14:54:06.798310 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-08-29 14:54:06.798321 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:01.211) 0:00:07.342 ********* 2025-08-29 14:54:06.798337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:54:06.798352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:54:06.798363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:54:06.798374 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:54:06.798392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:54:06.798415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798438 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:54:06.798469 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-08-29 14:54:06.798485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798575 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:54:06.798631 | orchestrator | 2025-08-29 14:54:06.798641 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-08-29 
14:54:06.798651 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:05.459) 0:00:12.802 ********* 2025-08-29 14:54:06.798681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:54:06.798693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798714 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:06.798725 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:54:06.798735 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798751 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:54:06.798771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:54:06.798805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798841 | orchestrator | skipping: [testbed-manager] 2025-08-29 14:54:06.798852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:54:06.798862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798882 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:06.798892 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:06.798902 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:06.798916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:54:06.798933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:54:06.798944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.798990 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:06.798999 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:06.799009 | orchestrator | 2025-08-29 14:54:06.799019 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-08-29 14:54:06.799029 | orchestrator | Friday 29 August 2025 14:51:24 +0000 (0:00:01.231) 0:00:14.034 ********* 2025-08-29 14:54:06.799040 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-08-29 14:54:06.799051 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 14:54:06.799074 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799123 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:54:06.799133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799164 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:06.799174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799236 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:06.799246 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:06.799256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799287 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:54:06.799297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799338 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:54:06.799348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799384 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:54:06.799394 | orchestrator |
2025-08-29 14:54:06.799404 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-08-29 14:54:06.799414 | orchestrator | Friday 29 August 2025 14:51:27 +0000 (0:00:02.277) 0:00:16.311 *********
2025-08-29 14:54:06.799424 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:54:06.799434 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:06.799444 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:06.799454 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:06.799464 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:54:06.799473 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:54:06.799483 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:54:06.799493 | orchestrator |
2025-08-29 14:54:06.799503 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-08-29 14:54:06.799513 | orchestrator | Friday 29 August 2025 14:51:28 +0000 (0:00:01.443) 0:00:17.755 *********
2025-08-29 14:54:06.799523 | orchestrator | skipping: [testbed-manager]
2025-08-29 14:54:06.799533 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:06.799542 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:06.799552 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:06.799562 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:54:06.799572 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:54:06.799581 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:54:06.799591 | orchestrator |
2025-08-29 14:54:06.799600 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-08-29 14:54:06.799610 | orchestrator | Friday 29 August 2025 14:51:29 +0000 (0:00:00.867) 0:00:18.623 *********
2025-08-29 14:54:06.799621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799632 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.799746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799815 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799846 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799888 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799898 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799908 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'},
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.799918 | orchestrator |
2025-08-29 14:54:06.799928 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-08-29 14:54:06.799938 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:10.412) 0:00:29.036 *********
2025-08-29 14:54:06.799948 | orchestrator | [WARNING]: Skipped
2025-08-29 14:54:06.799959 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-08-29 14:54:06.799969 | orchestrator | to this access issue:
2025-08-29 14:54:06.799979 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-08-29 14:54:06.799989 | orchestrator | directory
2025-08-29 14:54:06.799999 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:54:06.800009 | orchestrator |
2025-08-29 14:54:06.800019 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-08-29 14:54:06.800029 | orchestrator | Friday 29 August 2025 14:51:42 +0000 (0:00:02.756) 0:00:31.793 *********
2025-08-29 14:54:06.800039 | orchestrator | [WARNING]: Skipped
2025-08-29 14:54:06.800049 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-08-29 14:54:06.800059 | orchestrator | to this access issue:
2025-08-29 14:54:06.800069 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-08-29 14:54:06.800078 | orchestrator | directory
2025-08-29 14:54:06.800088 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:54:06.800098 | orchestrator |
2025-08-29 14:54:06.800108 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-08-29 14:54:06.800118 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:01.439) 0:00:33.232 *********
2025-08-29 14:54:06.800128 | orchestrator | [WARNING]: Skipped
2025-08-29 14:54:06.800138 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-08-29 14:54:06.800147 | orchestrator | to this access issue:
2025-08-29 14:54:06.800157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-08-29 14:54:06.800172 | orchestrator | directory
2025-08-29 14:54:06.800182 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:54:06.800238 | orchestrator |
2025-08-29 14:54:06.800248 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-08-29 14:54:06.800258 | orchestrator | Friday 29 August 2025 14:51:45 +0000 (0:00:01.077) 0:00:34.310 *********
2025-08-29 14:54:06.800268 | orchestrator | [WARNING]: Skipped
2025-08-29 14:54:06.800278 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-08-29 14:54:06.800287 | orchestrator | to this access issue:
2025-08-29 14:54:06.800297 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-08-29 14:54:06.800307 | orchestrator | directory
2025-08-29 14:54:06.800316 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 14:54:06.800326 | orchestrator |
2025-08-29 14:54:06.800336 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-08-29 14:54:06.800346 | orchestrator | Friday 29 August 2025 14:51:45 +0000 (0:00:00.952) 0:00:35.262 *********
2025-08-29 14:54:06.800355 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:06.800365 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:06.800375 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:06.800385 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:06.800394 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:06.800404 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:06.800414 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:06.800423 | orchestrator |
2025-08-29 14:54:06.800433 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-08-29 14:54:06.800443 | orchestrator | Friday 29 August 2025 14:51:49 +0000 (0:00:03.634) 0:00:38.896 *********
2025-08-29 14:54:06.800453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:54:06.800462 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:54:06.800472 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:54:06.800493 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:54:06.800503 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:54:06.800513 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:54:06.800523 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-08-29 14:54:06.800533 | orchestrator |
2025-08-29 14:54:06.800543 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-08-29 14:54:06.800552 | orchestrator | Friday 29 August 2025 14:51:53 +0000 (0:00:03.458) 0:00:42.355 *********
2025-08-29 14:54:06.800562 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:06.800572 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:06.800581 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:06.800591 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:06.800601 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:06.800610 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:06.800620 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:06.800629 | orchestrator |
2025-08-29 14:54:06.800639 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-08-29 14:54:06.800649 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:03.726) 0:00:46.081 *********
2025-08-29 14:54:06.800659 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.800676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800686 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.800697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800708 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.800734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800745 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800760 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.800776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800786 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.800797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800817 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800837 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.800848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800858 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800874 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.800884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800894 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800905 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800915 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.800925 | orchestrator |
2025-08-29 14:54:06.800935 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-08-29 14:54:06.800945 | orchestrator | Friday 29 August 2025 14:51:59 +0000 (0:00:02.755) 0:00:48.836 *********
2025-08-29 14:54:06.800954 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:54:06.800964 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:54:06.800978 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:54:06.800993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:54:06.801003 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:54:06.801013 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:54:06.801023 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-08-29 14:54:06.801032 | orchestrator |
2025-08-29 14:54:06.801042 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-08-29 14:54:06.801059 | orchestrator | Friday 29 August 2025 14:52:03 +0000 (0:00:04.299) 0:00:53.136 *********
2025-08-29 14:54:06.801069 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:54:06.801079 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:54:06.801088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:54:06.801098 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:54:06.801108 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:54:06.801117 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:54:06.801127 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-08-29 14:54:06.801137 | orchestrator |
2025-08-29 14:54:06.801146 | orchestrator | TASK [common : Check common containers] ****************************************
2025-08-29 14:54:06.801156 | orchestrator | Friday 29 August 2025 14:52:06 +0000 (0:00:02.974) 0:00:56.111 *********
2025-08-29 14:54:06.801166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.801239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.801253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name':
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.801263 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.801274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.801296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.801313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-08-29 14:54:06.801355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801399 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801432 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801473 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801490 | orchestrator | changed:
[testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:54:06.801500 | orchestrator |
2025-08-29 14:54:06.801518 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-08-29 14:54:06.801529 | orchestrator | Friday 29 August 2025 14:52:11 +0000 (0:00:04.329) 0:01:00.441 *********
2025-08-29 14:54:06.801539 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:06.801549 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:06.801559 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:06.801568 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:06.801578 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:06.801588 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:06.801597 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:06.801605 | orchestrator |
2025-08-29 14:54:06.801613 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-08-29 14:54:06.801621 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:02.374) 0:01:02.815 *********
2025-08-29 14:54:06.801630 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:06.801638 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:06.801646 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:06.801654 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:06.801662 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:06.801670 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:06.801678 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:06.801686 | orchestrator |
2025-08-29 14:54:06.801693 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:54:06.801702 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:01.729) 0:01:04.545 *********
2025-08-29 14:54:06.801710 | orchestrator |
2025-08-29 14:54:06.801718 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:54:06.801725 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.076) 0:01:04.621 *********
2025-08-29 14:54:06.801733 | orchestrator |
2025-08-29 14:54:06.801741 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:54:06.801749 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.069) 0:01:04.691 *********
2025-08-29 14:54:06.801757 | orchestrator |
2025-08-29 14:54:06.801764 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:54:06.801772 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.066) 0:01:04.757 *********
2025-08-29 14:54:06.801780 | orchestrator |
2025-08-29 14:54:06.801788 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:54:06.801796 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.239) 0:01:04.996 *********
2025-08-29 14:54:06.801804 | orchestrator |
2025-08-29 14:54:06.801812 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:54:06.801820 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.070) 0:01:05.066 *********
2025-08-29 14:54:06.801828 | orchestrator |
2025-08-29 14:54:06.801836 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-08-29 14:54:06.801844 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.067) 0:01:05.134 *********
2025-08-29 14:54:06.801852 | orchestrator |
2025-08-29 14:54:06.801860 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-08-29 14:54:06.801868 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:00.087) 0:01:05.221 *********
2025-08-29 14:54:06.801876 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:06.801883 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:06.801892 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:06.801905 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:06.801913 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:06.801920 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:06.801928 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:06.801936 | orchestrator |
2025-08-29 14:54:06.801944 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-08-29 14:54:06.801952 | orchestrator | Friday 29 August 2025 14:52:59 +0000 (0:00:43.154) 0:01:48.376 *********
2025-08-29 14:54:06.801960 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:06.801968 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:06.801976 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:06.801983 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:06.801991 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:06.801999 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:06.802007 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:06.802038 | orchestrator |
2025-08-29 14:54:06.802048 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-08-29 14:54:06.802056 | orchestrator | Friday 29 August 2025 14:53:54 +0000 (0:00:55.791) 0:02:44.168 *********
2025-08-29 14:54:06.802065 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:06.802073 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:06.802081 | orchestrator | ok: [testbed-manager]
2025-08-29 14:54:06.802088 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:06.802096 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:54:06.802104 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:54:06.802112 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:54:06.802120 | orchestrator |
2025-08-29 14:54:06.802128 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-08-29 14:54:06.802136 | orchestrator | Friday 29 August 2025 14:53:57 +0000 (0:00:02.496) 0:02:46.665 *********
2025-08-29 14:54:06.802144 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:06.802152 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:06.802160 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:06.802167 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:06.802175 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:06.802183 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:06.802204 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:06.802212 | orchestrator |
2025-08-29 14:54:06.802220 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:54:06.802230 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:54:06.802238 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:54:06.802256 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:54:06.802264 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:54:06.802272 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:54:06.802280 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:54:06.802288 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-08-29 14:54:06.802296 | orchestrator |
2025-08-29 14:54:06.802304 | orchestrator |
2025-08-29 14:54:06.802312 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:54:06.802320 | orchestrator | Friday 29 August 2025 14:54:05 +0000 (0:00:08.482) 0:02:55.147 *********
2025-08-29 14:54:06.802333 | orchestrator | ===============================================================================
2025-08-29 14:54:06.802341 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 55.79s
2025-08-29 14:54:06.802349 | orchestrator | common : Restart fluentd container ------------------------------------- 43.15s
2025-08-29 14:54:06.802356 | orchestrator | common : Copying over config.json files for services ------------------- 10.41s
2025-08-29 14:54:06.802364 | orchestrator | common : Restart cron container ----------------------------------------- 8.48s
2025-08-29 14:54:06.802372 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.46s
2025-08-29 14:54:06.802380 | orchestrator | common : Check common containers ---------------------------------------- 4.33s
2025-08-29 14:54:06.802388 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.30s
2025-08-29 14:54:06.802396 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.02s
2025-08-29 14:54:06.802404 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.73s
2025-08-29 14:54:06.802411 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.63s
2025-08-29 14:54:06.802419 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.46s
2025-08-29 14:54:06.802427
| orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.97s
2025-08-29 14:54:06.802435 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.76s
2025-08-29 14:54:06.802443 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.76s
2025-08-29 14:54:06.802451 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.50s
2025-08-29 14:54:06.802458 | orchestrator | common : Creating log volume -------------------------------------------- 2.37s
2025-08-29 14:54:06.802466 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.28s
2025-08-29 14:54:06.802474 | orchestrator | common : include_tasks -------------------------------------------------- 1.74s
2025-08-29 14:54:06.802482 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.73s
2025-08-29 14:54:06.802490 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.44s
2025-08-29 14:54:06.802498 | orchestrator | 2025-08-29 14:54:06 | INFO  | Task a38a0f58-cb24-45c4-905f-3efe3e8d7845 is in state SUCCESS
2025-08-29 14:54:06.802506 | orchestrator | 2025-08-29 14:54:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:09.836897 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:09.837002 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:09.839450 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task cd597b0e-a0f1-4bae-9330-495c2fead121 is in state STARTED
2025-08-29 14:54:09.840123 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:09.841059 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:09.842291 | orchestrator | 2025-08-29 14:54:09 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:09.842404 | orchestrator | 2025-08-29 14:54:09 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:12.914976 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:12.915091 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:12.915102 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task cd597b0e-a0f1-4bae-9330-495c2fead121 is in state STARTED
2025-08-29 14:54:12.915110 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:12.915144 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:12.915152 | orchestrator | 2025-08-29 14:54:12 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:12.915159 | orchestrator | 2025-08-29 14:54:12 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:15.921985 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:15.922169 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:15.922910 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task cd597b0e-a0f1-4bae-9330-495c2fead121 is in state STARTED
2025-08-29 14:54:15.923530 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:15.924439 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:15.925593 | orchestrator | 2025-08-29 14:54:15 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:15.925641 | orchestrator | 2025-08-29 14:54:15 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:18.971686 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:18.971766 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:18.971776 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task cd597b0e-a0f1-4bae-9330-495c2fead121 is in state STARTED
2025-08-29 14:54:18.971785 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:18.971793 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:18.971800 | orchestrator | 2025-08-29 14:54:18 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:18.971808 | orchestrator | 2025-08-29 14:54:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:21.999245 | orchestrator | 2025-08-29 14:54:21 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:21.999939 | orchestrator | 2025-08-29 14:54:21 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:22.001207 | orchestrator | 2025-08-29 14:54:22 | INFO  | Task cd597b0e-a0f1-4bae-9330-495c2fead121 is in state STARTED
2025-08-29 14:54:22.002049 | orchestrator | 2025-08-29 14:54:22 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:22.003447 | orchestrator | 2025-08-29 14:54:22 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:22.004838 | orchestrator | 2025-08-29 14:54:22 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:22.005151 | orchestrator | 2025-08-29 14:54:22 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:25.092346 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:25.092645 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:25.093480 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task cd597b0e-a0f1-4bae-9330-495c2fead121 is in state STARTED
2025-08-29 14:54:25.094136 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:25.096814 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:25.097298 | orchestrator | 2025-08-29 14:54:25 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:25.097327 | orchestrator | 2025-08-29 14:54:25 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:28.158814 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:28.161896 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:28.163833 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task cd597b0e-a0f1-4bae-9330-495c2fead121 is in state SUCCESS
2025-08-29 14:54:28.166115 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED
2025-08-29 14:54:28.171528 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:28.175697 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:28.179156 | orchestrator | 2025-08-29 14:54:28 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:28.179249 | orchestrator | 2025-08-29 14:54:28 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:31.397519 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:31.397650 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:31.397665 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED
2025-08-29 14:54:31.397903 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:31.398622 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:31.399540 | orchestrator | 2025-08-29 14:54:31 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:31.399561 | orchestrator | 2025-08-29 14:54:31 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:34.458325 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED
2025-08-29 14:54:34.459129 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:54:34.460571 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED
2025-08-29 14:54:34.462641 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED
2025-08-29 14:54:34.463290 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:54:34.464709 | orchestrator | 2025-08-29 14:54:34 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED
2025-08-29 14:54:34.465730 | orchestrator | 2025-08-29 14:54:34 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:54:37.500066 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29
14:54:37.500450 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:54:37.501119 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:54:37.502080 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:54:37.502713 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:54:37.505535 | orchestrator | 2025-08-29 14:54:37 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED 2025-08-29 14:54:37.505719 | orchestrator | 2025-08-29 14:54:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:40.603036 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:54:40.603110 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:54:40.603116 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:54:40.603120 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:54:40.603124 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:54:40.603128 | orchestrator | 2025-08-29 14:54:40 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state STARTED 2025-08-29 14:54:40.603133 | orchestrator | 2025-08-29 14:54:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:43.663377 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:54:43.663872 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 
14:54:43.676298 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:54:43.681322 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:54:43.688377 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:54:43.694945 | orchestrator | 2025-08-29 14:54:43 | INFO  | Task 461d8df3-b732-4a0d-958d-5ed5fb031316 is in state SUCCESS 2025-08-29 14:54:43.695764 | orchestrator | 2025-08-29 14:54:43.695798 | orchestrator | 2025-08-29 14:54:43.695807 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:54:43.695816 | orchestrator | 2025-08-29 14:54:43.695824 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:54:43.695832 | orchestrator | Friday 29 August 2025 14:54:11 +0000 (0:00:00.301) 0:00:00.301 ********* 2025-08-29 14:54:43.695840 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.695848 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.695856 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.695862 | orchestrator | 2025-08-29 14:54:43.695869 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:54:43.695876 | orchestrator | Friday 29 August 2025 14:54:11 +0000 (0:00:00.362) 0:00:00.664 ********* 2025-08-29 14:54:43.695885 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-08-29 14:54:43.695892 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-08-29 14:54:43.695899 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-08-29 14:54:43.695907 | orchestrator | 2025-08-29 14:54:43.695914 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-08-29 14:54:43.695921 
| orchestrator | 2025-08-29 14:54:43.695928 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-08-29 14:54:43.695934 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.350) 0:00:01.014 ********* 2025-08-29 14:54:43.695942 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:54:43.695973 | orchestrator | 2025-08-29 14:54:43.695981 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-08-29 14:54:43.695988 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.504) 0:00:01.519 ********* 2025-08-29 14:54:43.695995 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 14:54:43.696002 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 14:54:43.696010 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 14:54:43.696017 | orchestrator | 2025-08-29 14:54:43.696024 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-08-29 14:54:43.696031 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:01.300) 0:00:02.819 ********* 2025-08-29 14:54:43.696038 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-08-29 14:54:43.696045 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-08-29 14:54:43.696052 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-08-29 14:54:43.696059 | orchestrator | 2025-08-29 14:54:43.696065 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-08-29 14:54:43.696072 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:01.957) 0:00:04.777 ********* 2025-08-29 14:54:43.696079 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:43.696085 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:43.696091 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 14:54:43.696096 | orchestrator | 2025-08-29 14:54:43.696102 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-08-29 14:54:43.696108 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:01.980) 0:00:06.758 ********* 2025-08-29 14:54:43.696114 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:43.696122 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:43.696129 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:43.696135 | orchestrator | 2025-08-29 14:54:43.696142 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:54:43.696150 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:43.696183 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:43.696191 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:43.696197 | orchestrator | 2025-08-29 14:54:43.696203 | orchestrator | 2025-08-29 14:54:43.696210 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:54:43.696217 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:07.906) 0:00:14.664 ********* 2025-08-29 14:54:43.696224 | orchestrator | =============================================================================== 2025-08-29 14:54:43.696231 | orchestrator | memcached : Restart memcached container --------------------------------- 7.91s 2025-08-29 14:54:43.696238 | orchestrator | memcached : Check memcached container ----------------------------------- 1.98s 2025-08-29 14:54:43.696245 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.96s 2025-08-29 14:54:43.696252 | orchestrator | memcached : Ensuring config directories exist 
--------------------------- 1.30s 2025-08-29 14:54:43.696259 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s 2025-08-29 14:54:43.696266 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2025-08-29 14:54:43.696274 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-08-29 14:54:43.696280 | orchestrator | 2025-08-29 14:54:43.696286 | orchestrator | 2025-08-29 14:54:43.696293 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:54:43.696299 | orchestrator | 2025-08-29 14:54:43.696305 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:54:43.696333 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.258) 0:00:00.258 ********* 2025-08-29 14:54:43.696340 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:43.696346 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:43.696352 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:43.696358 | orchestrator | 2025-08-29 14:54:43.696365 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:54:43.696384 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.339) 0:00:00.597 ********* 2025-08-29 14:54:43.696390 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-08-29 14:54:43.696396 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-08-29 14:54:43.696402 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-08-29 14:54:43.696408 | orchestrator | 2025-08-29 14:54:43.696413 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-08-29 14:54:43.696420 | orchestrator | 2025-08-29 14:54:43.696426 | orchestrator | TASK [redis : include_tasks] 
***************************************************
2025-08-29 14:54:43.696432 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:01.112) 0:00:01.710 *********
2025-08-29 14:54:43.696438 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:54:43.696445 | orchestrator |
2025-08-29 14:54:43.696451 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-08-29 14:54:43.696457 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.713) 0:00:02.424 *********
2025-08-29 14:54:43.696467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696537 | orchestrator |
2025-08-29 14:54:43.696543 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-08-29 14:54:43.696553 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:01.384) 0:00:03.808 *********
2025-08-29 14:54:43.696560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696618 | orchestrator |
2025-08-29 14:54:43.696624 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-08-29 14:54:43.696631 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:02.587) 0:00:06.395 *********
2025-08-29 14:54:43.696638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696693 | orchestrator |
2025-08-29 14:54:43.696703 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-08-29 14:54:43.696709 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:02.943) 0:00:09.339 *********
2025-08-29 14:54:43.696715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-08-29 14:54:43.696762 | orchestrator |
2025-08-29 14:54:43.696768 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 14:54:43.696775 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:02.176) 0:00:11.515 *********
2025-08-29 14:54:43.696780 | orchestrator |
2025-08-29 14:54:43.696787 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 14:54:43.696796 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:00.185) 0:00:11.701 *********
2025-08-29 14:54:43.696802 | orchestrator |
2025-08-29 14:54:43.696807 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-08-29 14:54:43.696815 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:00.144) 0:00:11.846 *********
2025-08-29 14:54:43.696820 | orchestrator |
2025-08-29 14:54:43.696826 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-08-29 14:54:43.696832 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:00.194) 0:00:12.040 *********
2025-08-29 14:54:43.696838 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:43.696845 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:43.696850 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.696856 | orchestrator |
2025-08-29 14:54:43.696862 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-08-29 14:54:43.696867 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:08.014) 0:00:20.055 *********
2025-08-29 14:54:43.696873 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:43.696879 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:43.696885 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:43.696890 | orchestrator |
2025-08-29 14:54:43.696896 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:54:43.696902 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:54:43.696910 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:54:43.696916 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 14:54:43.696922 | orchestrator |
2025-08-29 14:54:43.696928 | orchestrator |
2025-08-29 14:54:43.696934 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:54:43.696945 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:10.472) 0:00:30.527 *********
2025-08-29 14:54:43.696952
| orchestrator | =============================================================================== 2025-08-29 14:54:43.696958 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.47s 2025-08-29 14:54:43.696964 | orchestrator | redis : Restart redis container ----------------------------------------- 8.01s 2025-08-29 14:54:43.696970 | orchestrator | redis : Copying over redis config files --------------------------------- 2.94s 2025-08-29 14:54:43.696975 | orchestrator | redis : Copying over default config.json files -------------------------- 2.59s 2025-08-29 14:54:43.696981 | orchestrator | redis : Check redis containers ------------------------------------------ 2.18s 2025-08-29 14:54:43.696987 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.38s 2025-08-29 14:54:43.696993 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.11s 2025-08-29 14:54:43.696998 | orchestrator | redis : include_tasks --------------------------------------------------- 0.71s 2025-08-29 14:54:43.697004 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.52s 2025-08-29 14:54:43.697010 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-08-29 14:54:43.697016 | orchestrator | 2025-08-29 14:54:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:47.140122 | orchestrator | 2025-08-29 14:54:47 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:54:47.140310 | orchestrator | 2025-08-29 14:54:47 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:54:47.141114 | orchestrator | 2025-08-29 14:54:47 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:54:47.142381 | orchestrator | 2025-08-29 14:54:47 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 
2025-08-29 14:54:47.144836 | orchestrator | 2025-08-29 14:54:47 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:54:47.144881 | orchestrator | 2025-08-29 14:54:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:50.200329 | orchestrator | 2025-08-29 14:54:50 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:54:50.200440 | orchestrator | 2025-08-29 14:54:50 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:54:50.200458 | orchestrator | 2025-08-29 14:54:50 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:54:50.200471 | orchestrator | 2025-08-29 14:54:50 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:54:50.200485 | orchestrator | 2025-08-29 14:54:50 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:54:50.200514 | orchestrator | 2025-08-29 14:54:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:53.497093 | orchestrator | 2025-08-29 14:54:53 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:54:53.497454 | orchestrator | 2025-08-29 14:54:53 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:54:53.499073 | orchestrator | 2025-08-29 14:54:53 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:54:53.500376 | orchestrator | 2025-08-29 14:54:53 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:54:53.502910 | orchestrator | 2025-08-29 14:54:53 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:54:53.502959 | orchestrator | 2025-08-29 14:54:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:56.531079 | orchestrator | 2025-08-29 14:54:56 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state STARTED 2025-08-29 14:54:56.531753 | 
orchestrator | 2025-08-29 14:54:56 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:54:56.535753 | orchestrator | 2025-08-29 14:54:56 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:54:56.536307 | orchestrator | 2025-08-29 14:54:56 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:54:56.536990 | orchestrator | 2025-08-29 14:54:56 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:54:56.537138 | orchestrator | 2025-08-29 14:54:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:54:59.824408 | orchestrator | 2025-08-29 14:54:59 | INFO  | Task fce05841-e576-462c-a723-efbbff8d10ed is in state STARTED 2025-08-29 14:54:59.825662 | orchestrator | 2025-08-29 14:54:59 | INFO  | Task e8b81cbc-6cd4-4c3b-9756-24eba2d1e39e is in state SUCCESS 2025-08-29 14:54:59.827293 | orchestrator | 2025-08-29 14:54:59.827330 | orchestrator | 2025-08-29 14:54:59.827338 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-08-29 14:54:59.827346 | orchestrator | 2025-08-29 14:54:59.827353 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-08-29 14:54:59.827360 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:00.307) 0:00:00.307 ********* 2025-08-29 14:54:59.827366 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:54:59.827374 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:54:59.827381 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:54:59.827387 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.827393 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.827400 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.827406 | orchestrator | 2025-08-29 14:54:59.827412 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-08-29 14:54:59.827419 | 
orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:00.934) 0:00:01.242 ********* 2025-08-29 14:54:59.827425 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.827433 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.827439 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.827445 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.827451 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.827458 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.827464 | orchestrator | 2025-08-29 14:54:59.827470 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-08-29 14:54:59.827477 | orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:00.602) 0:00:01.845 ********* 2025-08-29 14:54:59.827483 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.827490 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.827496 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.827502 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.827509 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.827515 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.827521 | orchestrator | 2025-08-29 14:54:59.827527 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-08-29 14:54:59.827533 | orchestrator | Friday 29 August 2025 14:51:14 +0000 (0:00:00.834) 0:00:02.680 ********* 2025-08-29 14:54:59.827540 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:54:59.827546 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:54:59.827552 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:54:59.827558 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.827564 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.827571 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.827577 | orchestrator | 2025-08-29 
14:54:59.827583 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-08-29 14:54:59.827589 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:01.922) 0:00:04.603 ********* 2025-08-29 14:54:59.827610 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:54:59.827617 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:54:59.827623 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:54:59.827629 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.827635 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.827641 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.827647 | orchestrator | 2025-08-29 14:54:59.827653 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-08-29 14:54:59.827660 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:01.063) 0:00:05.667 ********* 2025-08-29 14:54:59.827666 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:54:59.827672 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:54:59.827678 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:54:59.827684 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.827690 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.827696 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.827702 | orchestrator | 2025-08-29 14:54:59.827713 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-08-29 14:54:59.827720 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:01.259) 0:00:06.926 ********* 2025-08-29 14:54:59.827726 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.827732 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.827738 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.827744 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.827785 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 14:54:59.827792 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.827798 | orchestrator | 2025-08-29 14:54:59.827804 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-08-29 14:54:59.827811 | orchestrator | Friday 29 August 2025 14:51:19 +0000 (0:00:00.842) 0:00:07.769 ********* 2025-08-29 14:54:59.827817 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.827849 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.827856 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.827862 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.827868 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.827874 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.827880 | orchestrator | 2025-08-29 14:54:59.827888 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-08-29 14:54:59.827895 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:00.939) 0:00:08.709 ********* 2025-08-29 14:54:59.827902 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:54:59.827909 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:54:59.827916 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.827923 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:54:59.827930 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:54:59.827937 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:54:59.827944 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:54:59.827951 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.827958 | orchestrator | skipping: [testbed-node-0] => 
(item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:54:59.827965 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:54:59.827990 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.827998 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:54:59.828004 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:54:59.828010 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.828016 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.828028 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-08-29 14:54:59.828035 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-08-29 14:54:59.828041 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.828047 | orchestrator | 2025-08-29 14:54:59.828053 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-08-29 14:54:59.828059 | orchestrator | Friday 29 August 2025 14:51:21 +0000 (0:00:01.358) 0:00:10.068 ********* 2025-08-29 14:54:59.828065 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.828072 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.828078 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.828084 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.828090 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.828096 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.828102 | orchestrator | 2025-08-29 14:54:59.828108 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-08-29 14:54:59.828115 | orchestrator | Friday 29 August 2025 14:51:23 +0000 (0:00:01.481) 0:00:11.550 ********* 2025-08-29 14:54:59.828121 | orchestrator | ok: 
[testbed-node-3] 2025-08-29 14:54:59.828127 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:54:59.828133 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:54:59.828139 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.828145 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.828170 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.828176 | orchestrator | 2025-08-29 14:54:59.828182 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-08-29 14:54:59.828188 | orchestrator | Friday 29 August 2025 14:51:24 +0000 (0:00:01.601) 0:00:13.151 ********* 2025-08-29 14:54:59.828194 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.828201 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:54:59.828207 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.828213 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.828219 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:54:59.828225 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:54:59.828231 | orchestrator | 2025-08-29 14:54:59.828237 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-08-29 14:54:59.828243 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:05.557) 0:00:18.709 ********* 2025-08-29 14:54:59.828249 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.828255 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.828261 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.828267 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.828274 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.828280 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.828286 | orchestrator | 2025-08-29 14:54:59.828294 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-08-29 14:54:59.828304 | orchestrator | Friday 29 August 2025 
14:51:31 +0000 (0:00:01.200) 0:00:19.909 ********* 2025-08-29 14:54:59.828314 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.828323 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.828332 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.828341 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.828350 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.828360 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.828369 | orchestrator | 2025-08-29 14:54:59.828380 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-08-29 14:54:59.828392 | orchestrator | Friday 29 August 2025 14:51:34 +0000 (0:00:03.199) 0:00:23.109 ********* 2025-08-29 14:54:59.828402 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:54:59.828410 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:54:59.828417 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:54:59.828423 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.828435 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.828442 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.828448 | orchestrator | 2025-08-29 14:54:59.828454 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-08-29 14:54:59.828460 | orchestrator | Friday 29 August 2025 14:51:36 +0000 (0:00:01.927) 0:00:25.036 ********* 2025-08-29 14:54:59.828466 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-08-29 14:54:59.828473 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-08-29 14:54:59.828479 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-08-29 14:54:59.828485 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-08-29 14:54:59.828491 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-08-29 14:54:59.828498 | orchestrator | changed: 
[testbed-node-5] => (item=rancher/k3s) 2025-08-29 14:54:59.828504 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-08-29 14:54:59.828510 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-08-29 14:54:59.828516 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-08-29 14:54:59.828522 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-08-29 14:54:59.828528 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-08-29 14:54:59.828534 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-08-29 14:54:59.828540 | orchestrator | 2025-08-29 14:54:59.828546 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-08-29 14:54:59.828552 | orchestrator | Friday 29 August 2025 14:51:40 +0000 (0:00:03.739) 0:00:28.775 ********* 2025-08-29 14:54:59.828558 | orchestrator | changed: [testbed-node-5] 2025-08-29 14:54:59.828564 | orchestrator | changed: [testbed-node-4] 2025-08-29 14:54:59.828570 | orchestrator | changed: [testbed-node-3] 2025-08-29 14:54:59.828576 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.828582 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.828588 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.828595 | orchestrator | 2025-08-29 14:54:59.828605 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-08-29 14:54:59.828612 | orchestrator | 2025-08-29 14:54:59.828618 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-08-29 14:54:59.828624 | orchestrator | Friday 29 August 2025 14:51:43 +0000 (0:00:03.311) 0:00:32.087 ********* 2025-08-29 14:54:59.828631 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.828637 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.828643 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.828649 | orchestrator 
| 2025-08-29 14:54:59.828655 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-08-29 14:54:59.828661 | orchestrator | Friday 29 August 2025 14:51:45 +0000 (0:00:01.382) 0:00:33.470 ********* 2025-08-29 14:54:59.828667 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.828673 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.828679 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.828685 | orchestrator | 2025-08-29 14:54:59.828692 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-08-29 14:54:59.828698 | orchestrator | Friday 29 August 2025 14:51:47 +0000 (0:00:01.909) 0:00:35.380 ********* 2025-08-29 14:54:59.828704 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.828734 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.828742 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.828748 | orchestrator | 2025-08-29 14:54:59.828754 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-08-29 14:54:59.828761 | orchestrator | Friday 29 August 2025 14:51:48 +0000 (0:00:01.089) 0:00:36.469 ********* 2025-08-29 14:54:59.828767 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.828773 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.828779 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.828785 | orchestrator | 2025-08-29 14:54:59.828791 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-08-29 14:54:59.828798 | orchestrator | Friday 29 August 2025 14:51:49 +0000 (0:00:01.332) 0:00:37.802 ********* 2025-08-29 14:54:59.828809 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.828816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.828822 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.828828 | orchestrator | 2025-08-29 14:54:59.828834 | orchestrator | TASK 
[k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-08-29 14:54:59.828840 | orchestrator | Friday 29 August 2025 14:51:50 +0000 (0:00:00.621) 0:00:38.424 ********* 2025-08-29 14:54:59.828846 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.828853 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.828859 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.828865 | orchestrator | 2025-08-29 14:54:59.828871 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-08-29 14:54:59.828877 | orchestrator | Friday 29 August 2025 14:51:51 +0000 (0:00:00.890) 0:00:39.314 ********* 2025-08-29 14:54:59.828883 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.828890 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.828896 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.828902 | orchestrator | 2025-08-29 14:54:59.828908 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-08-29 14:54:59.828914 | orchestrator | Friday 29 August 2025 14:51:52 +0000 (0:00:01.863) 0:00:41.178 ********* 2025-08-29 14:54:59.828920 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:54:59.828926 | orchestrator | 2025-08-29 14:54:59.828932 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-08-29 14:54:59.828939 | orchestrator | Friday 29 August 2025 14:51:54 +0000 (0:00:01.151) 0:00:42.330 ********* 2025-08-29 14:54:59.828945 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.828958 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.828967 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.828973 | orchestrator | 2025-08-29 14:54:59.828980 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-08-29 14:54:59.828986 | 
orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:02.252) 0:00:44.583 ********* 2025-08-29 14:54:59.828992 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.828998 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.829004 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.829010 | orchestrator | 2025-08-29 14:54:59.829017 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-08-29 14:54:59.829023 | orchestrator | Friday 29 August 2025 14:51:57 +0000 (0:00:00.688) 0:00:45.271 ********* 2025-08-29 14:54:59.829029 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.829035 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.829041 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.829048 | orchestrator | 2025-08-29 14:54:59.829054 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-08-29 14:54:59.829060 | orchestrator | Friday 29 August 2025 14:51:58 +0000 (0:00:01.636) 0:00:46.908 ********* 2025-08-29 14:54:59.829066 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.829072 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.829078 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.829084 | orchestrator | 2025-08-29 14:54:59.829091 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-08-29 14:54:59.829097 | orchestrator | Friday 29 August 2025 14:52:01 +0000 (0:00:02.871) 0:00:49.780 ********* 2025-08-29 14:54:59.829103 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.829109 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.829115 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.829129 | orchestrator | 2025-08-29 14:54:59.829135 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-08-29 14:54:59.829141 | 
orchestrator | Friday 29 August 2025 14:52:02 +0000 (0:00:00.879) 0:00:50.660 ********* 2025-08-29 14:54:59.829161 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.829173 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.829179 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.829185 | orchestrator | 2025-08-29 14:54:59.829191 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-08-29 14:54:59.829197 | orchestrator | Friday 29 August 2025 14:52:03 +0000 (0:00:00.641) 0:00:51.302 ********* 2025-08-29 14:54:59.829204 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.829210 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.829216 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.829222 | orchestrator | 2025-08-29 14:54:59.829233 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-08-29 14:54:59.829240 | orchestrator | Friday 29 August 2025 14:52:05 +0000 (0:00:02.161) 0:00:53.463 ********* 2025-08-29 14:54:59.829246 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:54:59.829253 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:54:59.829259 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-08-29 14:54:59.829266 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 14:54:59.829272 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-08-29 14:54:59.829278 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-08-29 14:54:59.829284 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:54:59.829290 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:54:59.829297 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-08-29 14:54:59.829303 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 14:54:59.829309 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-08-29 14:54:59.829315 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-08-29 14:54:59.829322 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.829328 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.829334 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.829340 | orchestrator | 2025-08-29 14:54:59.829347 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-08-29 14:54:59.829353 | orchestrator | Friday 29 August 2025 14:52:49 +0000 (0:00:44.703) 0:01:38.167 ********* 2025-08-29 14:54:59.829359 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.829365 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.829372 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.829378 | orchestrator | 2025-08-29 14:54:59.829384 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-08-29 14:54:59.829394 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:00.296) 0:01:38.464 ********* 2025-08-29 14:54:59.829400 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.829406 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.829412 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.829419 | orchestrator | 2025-08-29 14:54:59.829425 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-08-29 14:54:59.829435 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.936) 0:01:39.400 ********* 2025-08-29 14:54:59.829442 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.829448 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.829454 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.829460 | orchestrator | 2025-08-29 14:54:59.829466 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-08-29 14:54:59.829473 | orchestrator | Friday 29 August 2025 14:52:52 +0000 (0:00:01.114) 0:01:40.515 ********* 2025-08-29 14:54:59.829479 
| orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.829485 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.829491 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.829497 | orchestrator | 2025-08-29 14:54:59.829503 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-08-29 14:54:59.829510 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:25.737) 0:02:06.253 ********* 2025-08-29 14:54:59.829516 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.829522 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.829528 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.829534 | orchestrator | 2025-08-29 14:54:59.829540 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-08-29 14:54:59.829547 | orchestrator | Friday 29 August 2025 14:53:18 +0000 (0:00:00.613) 0:02:06.866 ********* 2025-08-29 14:54:59.829553 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:54:59.829559 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:54:59.829565 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:54:59.829571 | orchestrator | 2025-08-29 14:54:59.829577 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-08-29 14:54:59.829584 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:00.666) 0:02:07.533 ********* 2025-08-29 14:54:59.829590 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:54:59.829596 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:54:59.829602 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:54:59.829608 | orchestrator | 2025-08-29 14:54:59.829615 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-08-29 14:54:59.829621 | orchestrator | Friday 29 August 2025 14:53:19 +0000 (0:00:00.637) 0:02:08.170 ********* 2025-08-29 14:54:59.829627 | orchestrator | ok: [testbed-node-0] 
2025-08-29 14:54:59.829637 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:59.829644 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:59.829650 | orchestrator |
2025-08-29 14:54:59.829656 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-08-29 14:54:59.829672 | orchestrator | Friday 29 August 2025 14:53:20 +0000 (0:00:00.794) 0:02:08.965 *********
2025-08-29 14:54:59.829678 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:59.829684 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:59.829690 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:59.829697 | orchestrator |
2025-08-29 14:54:59.829703 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-08-29 14:54:59.829709 | orchestrator | Friday 29 August 2025 14:53:21 +0000 (0:00:00.290) 0:02:09.256 *********
2025-08-29 14:54:59.829715 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:59.829721 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:59.829727 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:59.829733 | orchestrator |
2025-08-29 14:54:59.829740 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-08-29 14:54:59.829746 | orchestrator | Friday 29 August 2025 14:53:21 +0000 (0:00:00.616) 0:02:09.873 *********
2025-08-29 14:54:59.829752 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:59.829758 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:59.829764 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:59.829770 | orchestrator |
2025-08-29 14:54:59.829776 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-08-29 14:54:59.829782 | orchestrator | Friday 29 August 2025 14:53:22 +0000 (0:00:00.646) 0:02:10.519 *********
2025-08-29 14:54:59.829793 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:59.829799 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:59.829805 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:59.829811 | orchestrator |
2025-08-29 14:54:59.829818 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-08-29 14:54:59.829824 | orchestrator | Friday 29 August 2025 14:53:23 +0000 (0:00:01.099) 0:02:11.619 *********
2025-08-29 14:54:59.829830 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:54:59.829836 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:54:59.829842 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:54:59.829848 | orchestrator |
2025-08-29 14:54:59.829854 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-08-29 14:54:59.829861 | orchestrator | Friday 29 August 2025 14:53:24 +0000 (0:00:00.916) 0:02:12.535 *********
2025-08-29 14:54:59.829867 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.829873 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:59.829879 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:59.829885 | orchestrator |
2025-08-29 14:54:59.829892 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-08-29 14:54:59.829898 | orchestrator | Friday 29 August 2025 14:53:24 +0000 (0:00:00.273) 0:02:12.808 *********
2025-08-29 14:54:59.829904 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.829910 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:59.829916 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:59.829922 | orchestrator |
2025-08-29 14:54:59.829928 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-08-29 14:54:59.829934 | orchestrator | Friday 29 August 2025 14:53:24 +0000 (0:00:00.316) 0:02:13.125 *********
2025-08-29 14:54:59.829941 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:59.829947 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:59.829953 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:59.829959 | orchestrator |
2025-08-29 14:54:59.829965 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-08-29 14:54:59.829971 | orchestrator | Friday 29 August 2025 14:53:25 +0000 (0:00:00.872) 0:02:13.997 *********
2025-08-29 14:54:59.829977 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:59.829987 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:59.829993 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:59.829999 | orchestrator |
2025-08-29 14:54:59.830005 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-08-29 14:54:59.830124 | orchestrator | Friday 29 August 2025 14:53:26 +0000 (0:00:00.627) 0:02:14.625 *********
2025-08-29 14:54:59.830136 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 14:54:59.830142 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 14:54:59.830180 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-08-29 14:54:59.830187 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 14:54:59.830193 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 14:54:59.830199 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-08-29 14:54:59.830205 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 14:54:59.830212 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 14:54:59.830218 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-08-29 14:54:59.830224 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-08-29 14:54:59.830230 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 14:54:59.830242 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 14:54:59.830248 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-08-29 14:54:59.830260 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 14:54:59.830267 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 14:54:59.830273 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-08-29 14:54:59.830279 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 14:54:59.830285 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 14:54:59.830291 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-08-29 14:54:59.830298 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-08-29 14:54:59.830304 | orchestrator |
2025-08-29 14:54:59.830310 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-08-29 14:54:59.830316 | orchestrator |
2025-08-29 14:54:59.830323 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-08-29 14:54:59.830329 | orchestrator | Friday 29 August 2025 14:53:29 +0000 (0:00:03.218) 0:02:17.844 *********
2025-08-29 14:54:59.830335 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:54:59.830341 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:54:59.830347 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:54:59.830353 | orchestrator |
2025-08-29 14:54:59.830360 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-08-29 14:54:59.830366 | orchestrator | Friday 29 August 2025 14:53:30 +0000 (0:00:00.454) 0:02:18.298 *********
2025-08-29 14:54:59.830372 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:54:59.830378 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:54:59.830384 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:54:59.830390 | orchestrator |
2025-08-29 14:54:59.830396 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-08-29 14:54:59.830403 | orchestrator | Friday 29 August 2025 14:53:31 +0000 (0:00:01.572) 0:02:19.871 *********
2025-08-29 14:54:59.830409 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:54:59.830415 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:54:59.830421 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:54:59.830427 | orchestrator |
2025-08-29 14:54:59.830433 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-08-29 14:54:59.830439 | orchestrator | Friday 29 August 2025 14:53:32 +0000 (0:00:00.360) 0:02:20.231 *********
2025-08-29 14:54:59.830446 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 14:54:59.830452 | orchestrator |
2025-08-29 14:54:59.830458 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-08-29 14:54:59.830465 | orchestrator | Friday 29 August 2025 14:53:32 +0000 (0:00:00.697) 0:02:20.928 *********
2025-08-29 14:54:59.830471 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:54:59.830477 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:54:59.830483 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:54:59.830489 | orchestrator |
2025-08-29 14:54:59.830495 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-08-29 14:54:59.830502 | orchestrator | Friday 29 August 2025 14:53:33 +0000 (0:00:00.288) 0:02:21.216 *********
2025-08-29 14:54:59.830508 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:54:59.830514 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:54:59.830520 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:54:59.830526 | orchestrator |
2025-08-29 14:54:59.830532 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-08-29 14:54:59.830550 | orchestrator | Friday 29 August 2025 14:53:33 +0000 (0:00:00.373) 0:02:21.590 *********
2025-08-29 14:54:59.830556 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:54:59.830562 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:54:59.830569 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:54:59.830575 | orchestrator |
2025-08-29 14:54:59.830581 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-08-29 14:54:59.830587 | orchestrator | Friday 29 August 2025 14:53:33 +0000 (0:00:00.296) 0:02:21.887 *********
2025-08-29 14:54:59.830593 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:54:59.830599 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:54:59.830606 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:54:59.830612 | orchestrator |
2025-08-29 14:54:59.830618 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-08-29 14:54:59.830624 | orchestrator | Friday 29 August 2025 14:53:34 +0000 (0:00:00.813) 0:02:22.700 *********
2025-08-29 14:54:59.830630 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:59.830636 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:59.830642 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:59.830648 | orchestrator |
2025-08-29 14:54:59.830655 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-08-29 14:54:59.830661 | orchestrator | Friday 29 August 2025 14:53:35 +0000 (0:00:01.221) 0:02:23.921 *********
2025-08-29 14:54:59.830667 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:59.830673 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:59.830680 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:59.830686 | orchestrator |
2025-08-29 14:54:59.830692 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-08-29 14:54:59.830698 | orchestrator | Friday 29 August 2025 14:53:37 +0000 (0:00:01.365) 0:02:25.287 *********
2025-08-29 14:54:59.830704 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:54:59.830710 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:54:59.830717 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:54:59.830723 | orchestrator |
2025-08-29 14:54:59.830729 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-08-29 14:54:59.830735 | orchestrator |
2025-08-29 14:54:59.830741 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-08-29 14:54:59.830747 | orchestrator | Friday 29 August 2025 14:53:50 +0000 (0:00:12.989) 0:02:38.277 *********
2025-08-29 14:54:59.830753 | orchestrator | ok: [testbed-manager]
2025-08-29 14:54:59.830760 | orchestrator |
2025-08-29 14:54:59.830766 | orchestrator | TASK [Create .kube directory] **************************************************
2025-08-29 14:54:59.830772 | orchestrator | Friday 29 August 2025 14:53:50 +0000 (0:00:00.790) 0:02:39.067 *********
2025-08-29 14:54:59.830782 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.830788 | orchestrator |
2025-08-29 14:54:59.830794 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-08-29 14:54:59.830801 | orchestrator | Friday 29 August 2025 14:53:51 +0000 (0:00:00.438) 0:02:39.506 *********
2025-08-29 14:54:59.830807 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-08-29 14:54:59.830813 | orchestrator |
2025-08-29 14:54:59.830819 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-08-29 14:54:59.830826 | orchestrator | Friday 29 August 2025 14:53:51 +0000 (0:00:00.565) 0:02:40.072 *********
2025-08-29 14:54:59.830832 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.830838 | orchestrator |
2025-08-29 14:54:59.830844 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-08-29 14:54:59.830850 | orchestrator | Friday 29 August 2025 14:53:52 +0000 (0:00:00.883) 0:02:40.956 *********
2025-08-29 14:54:59.830856 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.830863 | orchestrator |
2025-08-29 14:54:59.830869 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-08-29 14:54:59.830875 | orchestrator | Friday 29 August 2025 14:53:53 +0000 (0:00:00.664) 0:02:41.620 *********
2025-08-29 14:54:59.830885 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 14:54:59.830892 | orchestrator |
2025-08-29 14:54:59.830898 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-08-29 14:54:59.830904 | orchestrator | Friday 29 August 2025 14:53:55 +0000 (0:00:01.619) 0:02:43.239 *********
2025-08-29 14:54:59.830910 | orchestrator | changed: [testbed-manager -> localhost]
2025-08-29 14:54:59.830916 | orchestrator |
2025-08-29 14:54:59.830922 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-08-29 14:54:59.830929 | orchestrator | Friday 29 August 2025 14:53:56 +0000 (0:00:01.094) 0:02:44.334 *********
2025-08-29 14:54:59.830935 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.830941 | orchestrator |
2025-08-29 14:54:59.830947 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-08-29 14:54:59.830953 | orchestrator | Friday 29 August 2025 14:53:56 +0000 (0:00:00.458) 0:02:44.792 *********
2025-08-29 14:54:59.830959 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.830965 | orchestrator |
2025-08-29 14:54:59.830971 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-08-29 14:54:59.830978 | orchestrator |
2025-08-29 14:54:59.830984 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-08-29 14:54:59.830990 | orchestrator | Friday 29 August 2025 14:53:57 +0000 (0:00:00.658) 0:02:45.451 *********
2025-08-29 14:54:59.830996 | orchestrator | ok: [testbed-manager]
2025-08-29 14:54:59.831002 | orchestrator |
2025-08-29 14:54:59.831008 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-08-29 14:54:59.831014 | orchestrator | Friday 29 August 2025 14:53:57 +0000 (0:00:00.156) 0:02:45.608 *********
2025-08-29 14:54:59.831021 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:54:59.831027 | orchestrator |
2025-08-29 14:54:59.831033 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-08-29 14:54:59.831039 | orchestrator | Friday 29 August 2025 14:53:57 +0000 (0:00:00.254) 0:02:45.863 *********
2025-08-29 14:54:59.831045 | orchestrator | ok: [testbed-manager]
2025-08-29 14:54:59.831051 | orchestrator |
2025-08-29 14:54:59.831057 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
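The kubeconfig tasks above fetch `k3s.yaml` from the first master and then rewrite the server address so clients reach the cluster endpoint (the log shows kubectl being pointed at `https://192.168.16.8:6443`) instead of the node-local loopback address k3s writes by default. A minimal sketch of that rewrite step, operating on a sample kubeconfig in a temp file rather than the real `~/.kube/config`:

```shell
# Sample kubeconfig with the loopback server address k3s writes by default
kcfg=$(mktemp)
cat > "$kcfg" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point the kubeconfig at the cluster endpoint seen in the log above
sed -i 's|https://127.0.0.1:6443|https://192.168.16.8:6443|' "$kcfg"
grep 'server:' "$kcfg"
```

The same substitution is done a second time in the log for the copy of the kubeconfig used inside the manager service.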
2025-08-29 14:54:59.831063 | orchestrator | Friday 29 August 2025 14:53:58 +0000 (0:00:00.973) 0:02:46.836 *********
2025-08-29 14:54:59.831073 | orchestrator | ok: [testbed-manager]
2025-08-29 14:54:59.831079 | orchestrator |
2025-08-29 14:54:59.831085 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-08-29 14:54:59.831092 | orchestrator | Friday 29 August 2025 14:54:00 +0000 (0:00:01.771) 0:02:48.607 *********
2025-08-29 14:54:59.831098 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.831104 | orchestrator |
2025-08-29 14:54:59.831110 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-08-29 14:54:59.831116 | orchestrator | Friday 29 August 2025 14:54:01 +0000 (0:00:00.764) 0:02:49.371 *********
2025-08-29 14:54:59.831122 | orchestrator | ok: [testbed-manager]
2025-08-29 14:54:59.831128 | orchestrator |
2025-08-29 14:54:59.831135 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-08-29 14:54:59.831141 | orchestrator | Friday 29 August 2025 14:54:01 +0000 (0:00:00.442) 0:02:49.813 *********
2025-08-29 14:54:59.831160 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.831167 | orchestrator |
2025-08-29 14:54:59.831173 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-08-29 14:54:59.831179 | orchestrator | Friday 29 August 2025 14:54:09 +0000 (0:00:08.023) 0:02:57.837 *********
2025-08-29 14:54:59.831185 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.831191 | orchestrator |
2025-08-29 14:54:59.831197 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-08-29 14:54:59.831203 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:14.226) 0:03:12.063 *********
2025-08-29 14:54:59.831209 | orchestrator | ok: [testbed-manager]
2025-08-29 14:54:59.831223 | orchestrator |
2025-08-29 14:54:59.831229 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-08-29 14:54:59.831236 | orchestrator |
2025-08-29 14:54:59.831242 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-08-29 14:54:59.831248 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:00.648) 0:03:12.711 *********
2025-08-29 14:54:59.831254 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:59.831260 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:59.831266 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:59.831272 | orchestrator |
2025-08-29 14:54:59.831279 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-08-29 14:54:59.831285 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:00.557) 0:03:13.271 *********
2025-08-29 14:54:59.831291 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831297 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:59.831303 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:59.831309 | orchestrator |
2025-08-29 14:54:59.831319 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-08-29 14:54:59.831326 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:00.362) 0:03:13.633 *********
2025-08-29 14:54:59.831332 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:54:59.831338 | orchestrator |
2025-08-29 14:54:59.831344 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-08-29 14:54:59.831351 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:00.860) 0:03:14.493 *********
2025-08-29 14:54:59.831357 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831363 | orchestrator |
2025-08-29 14:54:59.831369 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-08-29 14:54:59.831375 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:00.230) 0:03:14.724 *********
2025-08-29 14:54:59.831381 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831387 | orchestrator |
2025-08-29 14:54:59.831393 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-08-29 14:54:59.831400 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:00.276) 0:03:15.000 *********
2025-08-29 14:54:59.831406 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831412 | orchestrator |
2025-08-29 14:54:59.831418 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-08-29 14:54:59.831424 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.249) 0:03:15.250 *********
2025-08-29 14:54:59.831430 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831436 | orchestrator |
2025-08-29 14:54:59.831442 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-08-29 14:54:59.831448 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.232) 0:03:15.483 *********
2025-08-29 14:54:59.831455 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831461 | orchestrator |
2025-08-29 14:54:59.831467 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-08-29 14:54:59.831473 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.259) 0:03:15.742 *********
2025-08-29 14:54:59.831479 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831485 | orchestrator |
2025-08-29 14:54:59.831491 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-08-29 14:54:59.831497 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.222) 0:03:15.964 *********
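The Cilium CLI tasks above implement a version gate: compare the installed CLI version against the latest stable version and only download, verify, and extract a new tarball when they differ (here every step is skipped, since this host is not the designated installer). A minimal sketch of the decision logic, with both version strings hard-coded as hypothetical stand-ins for what `cilium version` and the project's stable-version file would return:

```shell
installed="v0.16.4"   # hypothetical: would be parsed from `cilium version` output
stable="v0.16.7"      # hypothetical: would be read from the stable version file

# Install or update only when the installed version differs from stable
if [ "$installed" != "$stable" ]; then
  decision="needs-install-or-update"
else
  decision="up-to-date"
fi
echo "$decision"
```

Gating on the version comparison keeps the role idempotent: re-running it downloads nothing when the CLI is already current.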
2025-08-29 14:54:59.831503 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831509 | orchestrator |
2025-08-29 14:54:59.831516 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-08-29 14:54:59.831522 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:00.195) 0:03:16.160 *********
2025-08-29 14:54:59.831528 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831534 | orchestrator |
2025-08-29 14:54:59.831540 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-08-29 14:54:59.831551 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:00.189) 0:03:16.349 *********
2025-08-29 14:54:59.831557 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831563 | orchestrator |
2025-08-29 14:54:59.831569 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-08-29 14:54:59.831575 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:00.187) 0:03:16.536 *********
2025-08-29 14:54:59.831581 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-08-29 14:54:59.831587 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-08-29 14:54:59.831594 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831600 | orchestrator |
2025-08-29 14:54:59.831609 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-08-29 14:54:59.831615 | orchestrator | Friday 29 August 2025 14:54:29 +0000 (0:00:00.825) 0:03:17.362 *********
2025-08-29 14:54:59.831622 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831628 | orchestrator |
2025-08-29 14:54:59.831634 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-08-29 14:54:59.831640 | orchestrator | Friday 29 August 2025 14:54:29 +0000 (0:00:00.258) 0:03:17.620 *********
2025-08-29 14:54:59.831646 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831652 | orchestrator |
2025-08-29 14:54:59.831658 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-08-29 14:54:59.831665 | orchestrator | Friday 29 August 2025 14:54:29 +0000 (0:00:00.234) 0:03:17.855 *********
2025-08-29 14:54:59.831671 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831677 | orchestrator |
2025-08-29 14:54:59.831683 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-08-29 14:54:59.831689 | orchestrator | Friday 29 August 2025 14:54:29 +0000 (0:00:00.196) 0:03:18.052 *********
2025-08-29 14:54:59.831695 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831701 | orchestrator |
2025-08-29 14:54:59.831707 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-08-29 14:54:59.831713 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:00.215) 0:03:18.268 *********
2025-08-29 14:54:59.831719 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831725 | orchestrator |
2025-08-29 14:54:59.831731 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-08-29 14:54:59.831738 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:00.215) 0:03:18.483 *********
2025-08-29 14:54:59.831744 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831750 | orchestrator |
2025-08-29 14:54:59.831756 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-08-29 14:54:59.831762 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:00.216) 0:03:18.700 *********
2025-08-29 14:54:59.831768 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831774 | orchestrator |
2025-08-29 14:54:59.831780 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-08-29 14:54:59.831786 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:00.211) 0:03:18.912 *********
2025-08-29 14:54:59.831793 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831799 | orchestrator |
2025-08-29 14:54:59.831805 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-08-29 14:54:59.831814 | orchestrator | Friday 29 August 2025 14:54:30 +0000 (0:00:00.207) 0:03:19.119 *********
2025-08-29 14:54:59.831821 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831827 | orchestrator |
2025-08-29 14:54:59.831833 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-08-29 14:54:59.831840 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:00.222) 0:03:19.342 *********
2025-08-29 14:54:59.831846 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831852 | orchestrator |
2025-08-29 14:54:59.831858 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-08-29 14:54:59.831864 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:00.253) 0:03:19.596 *********
2025-08-29 14:54:59.831903 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831909 | orchestrator |
2025-08-29 14:54:59.831916 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-08-29 14:54:59.831922 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:00.239) 0:03:19.835 *********
2025-08-29 14:54:59.831928 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-08-29 14:54:59.831935 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-08-29 14:54:59.831941 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-08-29 14:54:59.831947 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-08-29 14:54:59.831953 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831960 | orchestrator |
2025-08-29 14:54:59.831966 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-08-29 14:54:59.831972 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:01.009) 0:03:20.845 *********
2025-08-29 14:54:59.831978 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.831984 | orchestrator |
2025-08-29 14:54:59.831990 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-08-29 14:54:59.831997 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:00.209) 0:03:21.055 *********
2025-08-29 14:54:59.832003 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.832009 | orchestrator |
2025-08-29 14:54:59.832015 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-08-29 14:54:59.832021 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:00.220) 0:03:21.275 *********
2025-08-29 14:54:59.832028 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.832034 | orchestrator |
2025-08-29 14:54:59.832040 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-08-29 14:54:59.832046 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:00.201) 0:03:21.477 *********
2025-08-29 14:54:59.832052 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.832058 | orchestrator |
2025-08-29 14:54:59.832064 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-08-29 14:54:59.832070 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:00.211) 0:03:21.688 *********
2025-08-29 14:54:59.832077 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-08-29 14:54:59.832083 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-08-29 14:54:59.832089 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.832095 | orchestrator |
2025-08-29 14:54:59.832101 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-08-29 14:54:59.832107 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:00.287) 0:03:21.976 *********
2025-08-29 14:54:59.832114 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:54:59.832120 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:54:59.832130 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:54:59.832136 | orchestrator |
2025-08-29 14:54:59.832142 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-08-29 14:54:59.832161 | orchestrator | Friday 29 August 2025 14:54:34 +0000 (0:00:00.348) 0:03:22.325 *********
2025-08-29 14:54:59.832168 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:59.832174 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:59.832180 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:59.832186 | orchestrator |
2025-08-29 14:54:59.832193 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-08-29 14:54:59.832199 | orchestrator |
2025-08-29 14:54:59.832205 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-08-29 14:54:59.832211 | orchestrator | Friday 29 August 2025 14:54:35 +0000 (0:00:01.210) 0:03:23.536 *********
2025-08-29 14:54:59.832218 | orchestrator | ok: [testbed-manager]
2025-08-29 14:54:59.832224 | orchestrator |
2025-08-29 14:54:59.832230 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-08-29 14:54:59.832241 | orchestrator | Friday 29 August 2025 14:54:35 +0000 (0:00:00.178) 0:03:23.714 *********
2025-08-29 14:54:59.832247 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-08-29 14:54:59.832253 | orchestrator |
2025-08-29 14:54:59.832260 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-08-29 14:54:59.832266 | orchestrator | Friday 29 August 2025 14:54:35 +0000 (0:00:00.209) 0:03:23.924 *********
2025-08-29 14:54:59.832272 | orchestrator | changed: [testbed-manager]
2025-08-29 14:54:59.832278 | orchestrator |
2025-08-29 14:54:59.832284 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-08-29 14:54:59.832291 | orchestrator |
2025-08-29 14:54:59.832297 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-08-29 14:54:59.832303 | orchestrator | Friday 29 August 2025 14:54:41 +0000 (0:00:05.684) 0:03:29.609 *********
2025-08-29 14:54:59.832309 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:54:59.832315 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:54:59.832321 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:54:59.832328 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:54:59.832334 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:54:59.832340 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:54:59.832346 | orchestrator |
2025-08-29 14:54:59.832352 | orchestrator | TASK [Manage labels] ***********************************************************
2025-08-29 14:54:59.832358 | orchestrator | Friday 29 August 2025 14:54:42 +0000 (0:00:00.899) 0:03:30.508 *********
2025-08-29 14:54:59.832369 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-08-29 14:54:59.832375 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-08-29 14:54:59.832382 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-08-29 14:54:59.832388 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-08-29 14:54:59.832394 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-08-29 14:54:59.832401 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-08-29 14:54:59.832407 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-08-29 14:54:59.832413 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-08-29 14:54:59.832420 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-08-29 14:54:59.832426 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-08-29 14:54:59.832432 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-08-29 14:54:59.832438 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-08-29 14:54:59.832444 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-08-29 14:54:59.832451 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-08-29 14:54:59.832457 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-08-29 14:54:59.832463 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-08-29 14:54:59.832469 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-08-29 14:54:59.832475 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-08-29 14:54:59.832482 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-08-29 14:54:59.832488 | orchestrator | ok: [testbed-node-1 -> localhost] =>
(item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:54:59.832494 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-08-29 14:54:59.832507 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:54:59.832514 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:54:59.832520 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-08-29 14:54:59.832526 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:54:59.832532 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:54:59.832542 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-08-29 14:54:59.832548 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:54:59.832555 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:54:59.832561 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-08-29 14:54:59.832567 | orchestrator | 2025-08-29 14:54:59.832573 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-08-29 14:54:59.832579 | orchestrator | Friday 29 August 2025 14:54:55 +0000 (0:00:13.444) 0:03:43.953 ********* 2025-08-29 14:54:59.832586 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.832592 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.832598 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.832604 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.832611 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.832617 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
14:54:59.832623 | orchestrator | 2025-08-29 14:54:59.832629 | orchestrator | TASK [Manage taints] *********************************************************** 2025-08-29 14:54:59.832635 | orchestrator | Friday 29 August 2025 14:54:56 +0000 (0:00:00.709) 0:03:44.663 ********* 2025-08-29 14:54:59.832642 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:54:59.832648 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:54:59.832654 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:54:59.832660 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:54:59.832666 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:54:59.832672 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:54:59.832678 | orchestrator | 2025-08-29 14:54:59.832703 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:54:59.832710 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:54:59.832718 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-08-29 14:54:59.832725 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 14:54:59.832735 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 14:54:59.832742 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:54:59.832748 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:54:59.832754 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-08-29 14:54:59.832761 | orchestrator | 2025-08-29 14:54:59.832767 | orchestrator | 2025-08-29 14:54:59.832773 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 14:54:59.832780 | orchestrator | Friday 29 August 2025 14:54:57 +0000 (0:00:00.554) 0:03:45.217 ********* 2025-08-29 14:54:59.832791 | orchestrator | =============================================================================== 2025-08-29 14:54:59.832797 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.70s 2025-08-29 14:54:59.832804 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.74s 2025-08-29 14:54:59.832810 | orchestrator | kubectl : Install required packages ------------------------------------ 14.23s 2025-08-29 14:54:59.832816 | orchestrator | Manage labels ---------------------------------------------------------- 13.44s 2025-08-29 14:54:59.832822 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.99s 2025-08-29 14:54:59.832828 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.02s 2025-08-29 14:54:59.832835 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.68s 2025-08-29 14:54:59.832841 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.56s 2025-08-29 14:54:59.832847 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 3.74s 2025-08-29 14:54:59.832853 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 3.31s 2025-08-29 14:54:59.832859 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.22s 2025-08-29 14:54:59.832865 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.20s 2025-08-29 14:54:59.832872 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 
2.87s 2025-08-29 14:54:59.832878 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.25s 2025-08-29 14:54:59.832884 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.16s 2025-08-29 14:54:59.832890 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.93s 2025-08-29 14:54:59.832896 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.92s 2025-08-29 14:54:59.832902 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.91s 2025-08-29 14:54:59.832912 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.86s 2025-08-29 14:54:59.832918 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.77s 2025-08-29 14:54:59.832924 | orchestrator | 2025-08-29 14:54:59 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:54:59.832931 | orchestrator | 2025-08-29 14:54:59 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:54:59.832985 | orchestrator | 2025-08-29 14:54:59 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:54:59.832994 | orchestrator | 2025-08-29 14:54:59 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:54:59.834239 | orchestrator | 2025-08-29 14:54:59 | INFO  | Task 61591746-7064-4c4e-b5f7-fa9566116e97 is in state STARTED 2025-08-29 14:54:59.834312 | orchestrator | 2025-08-29 14:54:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:02.864963 | orchestrator | 2025-08-29 14:55:02 | INFO  | Task fce05841-e576-462c-a723-efbbff8d10ed is in state STARTED 2025-08-29 14:55:02.866300 | orchestrator | 2025-08-29 14:55:02 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 
14:55:02.866948 | orchestrator | 2025-08-29 14:55:02 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:02.867713 | orchestrator | 2025-08-29 14:55:02 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:55:02.868605 | orchestrator | 2025-08-29 14:55:02 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:02.869461 | orchestrator | 2025-08-29 14:55:02 | INFO  | Task 61591746-7064-4c4e-b5f7-fa9566116e97 is in state STARTED 2025-08-29 14:55:02.869508 | orchestrator | 2025-08-29 14:55:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:05.912767 | orchestrator | 2025-08-29 14:55:05 | INFO  | Task fce05841-e576-462c-a723-efbbff8d10ed is in state STARTED 2025-08-29 14:55:05.912841 | orchestrator | 2025-08-29 14:55:05 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:05.916622 | orchestrator | 2025-08-29 14:55:05 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:05.919295 | orchestrator | 2025-08-29 14:55:05 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:55:05.923053 | orchestrator | 2025-08-29 14:55:05 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:05.923131 | orchestrator | 2025-08-29 14:55:05 | INFO  | Task 61591746-7064-4c4e-b5f7-fa9566116e97 is in state SUCCESS 2025-08-29 14:55:05.923169 | orchestrator | 2025-08-29 14:55:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:08.965002 | orchestrator | 2025-08-29 14:55:08 | INFO  | Task fce05841-e576-462c-a723-efbbff8d10ed is in state STARTED 2025-08-29 14:55:08.965237 | orchestrator | 2025-08-29 14:55:08 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:08.966960 | orchestrator | 2025-08-29 14:55:08 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 
14:55:08.968099 | orchestrator | 2025-08-29 14:55:08 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:55:08.970014 | orchestrator | 2025-08-29 14:55:08 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:08.971527 | orchestrator | 2025-08-29 14:55:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:12.011173 | orchestrator | 2025-08-29 14:55:12 | INFO  | Task fce05841-e576-462c-a723-efbbff8d10ed is in state SUCCESS 2025-08-29 14:55:12.012177 | orchestrator | 2025-08-29 14:55:12 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:12.013469 | orchestrator | 2025-08-29 14:55:12 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:12.014503 | orchestrator | 2025-08-29 14:55:12 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:55:12.015284 | orchestrator | 2025-08-29 14:55:12 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:12.015309 | orchestrator | 2025-08-29 14:55:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:15.090732 | orchestrator | 2025-08-29 14:55:15 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:15.094920 | orchestrator | 2025-08-29 14:55:15 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:15.095983 | orchestrator | 2025-08-29 14:55:15 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:55:15.097625 | orchestrator | 2025-08-29 14:55:15 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:15.097657 | orchestrator | 2025-08-29 14:55:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:18.152931 | orchestrator | 2025-08-29 14:55:18 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:18.153700 | orchestrator 
| 2025-08-29 14:55:18 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:18.154678 | orchestrator | 2025-08-29 14:55:18 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:55:18.155977 | orchestrator | 2025-08-29 14:55:18 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:18.156043 | orchestrator | 2025-08-29 14:55:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:21.250768 | orchestrator | 2025-08-29 14:55:21 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:21.251011 | orchestrator | 2025-08-29 14:55:21 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:21.251985 | orchestrator | 2025-08-29 14:55:21 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:55:21.252867 | orchestrator | 2025-08-29 14:55:21 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:21.252904 | orchestrator | 2025-08-29 14:55:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:24.294590 | orchestrator | 2025-08-29 14:55:24 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:24.296040 | orchestrator | 2025-08-29 14:55:24 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:24.297660 | orchestrator | 2025-08-29 14:55:24 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state STARTED 2025-08-29 14:55:24.299438 | orchestrator | 2025-08-29 14:55:24 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:24.299499 | orchestrator | 2025-08-29 14:55:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:27.338283 | orchestrator | 2025-08-29 14:55:27 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:27.338509 | orchestrator | 2025-08-29 14:55:27 | INFO  | 
Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:27.340080 | orchestrator | 2025-08-29 14:55:27 | INFO  | Task 94fc4823-b260-4d39-89f7-fbbef41a8b85 is in state SUCCESS 2025-08-29 14:55:27.342210 | orchestrator | 2025-08-29 14:55:27.342298 | orchestrator | 2025-08-29 14:55:27.342320 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-08-29 14:55:27.342340 | orchestrator | 2025-08-29 14:55:27.342358 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:55:27.342374 | orchestrator | Friday 29 August 2025 14:55:01 +0000 (0:00:00.216) 0:00:00.216 ********* 2025-08-29 14:55:27.342392 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 14:55:27.342409 | orchestrator | 2025-08-29 14:55:27.342426 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:55:27.342442 | orchestrator | Friday 29 August 2025 14:55:02 +0000 (0:00:00.944) 0:00:01.161 ********* 2025-08-29 14:55:27.342459 | orchestrator | changed: [testbed-manager] 2025-08-29 14:55:27.342475 | orchestrator | 2025-08-29 14:55:27.342491 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-08-29 14:55:27.342509 | orchestrator | Friday 29 August 2025 14:55:03 +0000 (0:00:01.294) 0:00:02.455 ********* 2025-08-29 14:55:27.342526 | orchestrator | changed: [testbed-manager] 2025-08-29 14:55:27.342544 | orchestrator | 2025-08-29 14:55:27.342562 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:55:27.342581 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:55:27.342600 | orchestrator | 2025-08-29 14:55:27.342617 | orchestrator | 2025-08-29 14:55:27.342635 | orchestrator | TASKS RECAP 
******************************************************************** 2025-08-29 14:55:27.342652 | orchestrator | Friday 29 August 2025 14:55:04 +0000 (0:00:00.400) 0:00:02.856 ********* 2025-08-29 14:55:27.342669 | orchestrator | =============================================================================== 2025-08-29 14:55:27.342720 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.29s 2025-08-29 14:55:27.342738 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.94s 2025-08-29 14:55:27.342756 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s 2025-08-29 14:55:27.342774 | orchestrator | 2025-08-29 14:55:27.342792 | orchestrator | 2025-08-29 14:55:27.342810 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-08-29 14:55:27.342827 | orchestrator | 2025-08-29 14:55:27.342846 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-08-29 14:55:27.342881 | orchestrator | Friday 29 August 2025 14:55:02 +0000 (0:00:00.177) 0:00:00.177 ********* 2025-08-29 14:55:27.342901 | orchestrator | ok: [testbed-manager] 2025-08-29 14:55:27.342919 | orchestrator | 2025-08-29 14:55:27.342937 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-08-29 14:55:27.342956 | orchestrator | Friday 29 August 2025 14:55:02 +0000 (0:00:00.589) 0:00:00.767 ********* 2025-08-29 14:55:27.342973 | orchestrator | ok: [testbed-manager] 2025-08-29 14:55:27.342989 | orchestrator | 2025-08-29 14:55:27.343005 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-08-29 14:55:27.343021 | orchestrator | Friday 29 August 2025 14:55:03 +0000 (0:00:00.799) 0:00:01.566 ********* 2025-08-29 14:55:27.343036 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-08-29 
14:55:27.343053 | orchestrator | 2025-08-29 14:55:27.343069 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-08-29 14:55:27.343085 | orchestrator | Friday 29 August 2025 14:55:04 +0000 (0:00:00.842) 0:00:02.409 ********* 2025-08-29 14:55:27.343101 | orchestrator | changed: [testbed-manager] 2025-08-29 14:55:27.343119 | orchestrator | 2025-08-29 14:55:27.343185 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-08-29 14:55:27.343202 | orchestrator | Friday 29 August 2025 14:55:05 +0000 (0:00:01.104) 0:00:03.513 ********* 2025-08-29 14:55:27.343218 | orchestrator | changed: [testbed-manager] 2025-08-29 14:55:27.343234 | orchestrator | 2025-08-29 14:55:27.343250 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-08-29 14:55:27.343267 | orchestrator | Friday 29 August 2025 14:55:06 +0000 (0:00:00.931) 0:00:04.444 ********* 2025-08-29 14:55:27.343283 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:55:27.343300 | orchestrator | 2025-08-29 14:55:27.343316 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-08-29 14:55:27.343333 | orchestrator | Friday 29 August 2025 14:55:08 +0000 (0:00:01.704) 0:00:06.148 ********* 2025-08-29 14:55:27.343350 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 14:55:27.343367 | orchestrator | 2025-08-29 14:55:27.343383 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-08-29 14:55:27.343401 | orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:01.013) 0:00:07.161 ********* 2025-08-29 14:55:27.343418 | orchestrator | ok: [testbed-manager] 2025-08-29 14:55:27.343435 | orchestrator | 2025-08-29 14:55:27.343453 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-08-29 14:55:27.343471 | 
orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:00.552) 0:00:07.714 ********* 2025-08-29 14:55:27.343487 | orchestrator | ok: [testbed-manager] 2025-08-29 14:55:27.343503 | orchestrator | 2025-08-29 14:55:27.343519 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:55:27.343535 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 14:55:27.343552 | orchestrator | 2025-08-29 14:55:27.343567 | orchestrator | 2025-08-29 14:55:27.343584 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 14:55:27.343600 | orchestrator | Friday 29 August 2025 14:55:10 +0000 (0:00:00.377) 0:00:08.092 ********* 2025-08-29 14:55:27.343617 | orchestrator | =============================================================================== 2025-08-29 14:55:27.343649 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.70s 2025-08-29 14:55:27.343666 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.10s 2025-08-29 14:55:27.343683 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.01s 2025-08-29 14:55:27.343722 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.93s 2025-08-29 14:55:27.343741 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.84s 2025-08-29 14:55:27.343757 | orchestrator | Create .kube directory -------------------------------------------------- 0.80s 2025-08-29 14:55:27.343774 | orchestrator | Get home directory of operator user ------------------------------------- 0.59s 2025-08-29 14:55:27.343790 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.55s 2025-08-29 14:55:27.343806 | orchestrator | Enable kubectl command line completion 
---------------------------------- 0.38s 2025-08-29 14:55:27.343823 | orchestrator | 2025-08-29 14:55:27.343840 | orchestrator | 2025-08-29 14:55:27.343858 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:55:27.343874 | orchestrator | 2025-08-29 14:55:27.343891 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:55:27.343906 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.393) 0:00:00.393 ********* 2025-08-29 14:55:27.343923 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:55:27.343940 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:55:27.343957 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:55:27.343974 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:55:27.343989 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:55:27.344007 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:55:27.344023 | orchestrator | 2025-08-29 14:55:27.344040 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:55:27.344057 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:01.052) 0:00:01.445 ********* 2025-08-29 14:55:27.344074 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:55:27.344092 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:55:27.344109 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:55:27.344126 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:55:27.344167 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:55:27.344183 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-08-29 14:55:27.344199 | orchestrator | 2025-08-29 
14:55:27.344216 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-08-29 14:55:27.344232 | orchestrator | 2025-08-29 14:55:27.344248 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-08-29 14:55:27.344263 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.588) 0:00:02.033 ********* 2025-08-29 14:55:27.344281 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:55:27.344300 | orchestrator | 2025-08-29 14:55:27.344315 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-08-29 14:55:27.344331 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:01.309) 0:00:03.343 ********* 2025-08-29 14:55:27.344345 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 14:55:27.344362 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 14:55:27.344378 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-08-29 14:55:27.344394 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 14:55:27.344410 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 14:55:27.344425 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 14:55:27.344454 | orchestrator | 2025-08-29 14:55:27.344471 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-08-29 14:55:27.344487 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:01.351) 0:00:04.695 ********* 2025-08-29 14:55:27.344504 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-08-29 14:55:27.344520 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-08-29 14:55:27.345366 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 
2025-08-29 14:55:27.345416 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-08-29 14:55:27.345432 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-08-29 14:55:27.345447 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-08-29 14:55:27.345463 | orchestrator | 2025-08-29 14:55:27.345479 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-08-29 14:55:27.345495 | orchestrator | Friday 29 August 2025 14:54:18 +0000 (0:00:01.625) 0:00:06.320 ********* 2025-08-29 14:55:27.345512 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-08-29 14:55:27.345527 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:55:27.345544 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-08-29 14:55:27.345560 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-08-29 14:55:27.345576 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:55:27.345592 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-08-29 14:55:27.345608 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:55:27.345624 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-08-29 14:55:27.345639 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:27.345655 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:27.345670 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-08-29 14:55:27.345686 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:27.345701 | orchestrator | 2025-08-29 14:55:27.345717 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-08-29 14:55:27.345732 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:01.195) 0:00:07.515 ********* 2025-08-29 14:55:27.345748 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:55:27.345764 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
14:55:27.345780 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:55:27.345815 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:27.345831 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:27.345847 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:27.345861 | orchestrator | 2025-08-29 14:55:27.345877 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-08-29 14:55:27.345892 | orchestrator | Friday 29 August 2025 14:54:20 +0000 (0:00:00.826) 0:00:08.342 ********* 2025-08-29 14:55:27.345918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.345943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.345977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.345994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346336 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346380 | orchestrator | 2025-08-29 14:55:27.346396 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-08-29 14:55:27.346418 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:02.096) 0:00:10.439 ********* 2025-08-29 14:55:27.346435 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346496 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346701 | orchestrator | 2025-08-29 14:55:27.346718 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-08-29 14:55:27.346734 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:03.695) 0:00:14.135 ********* 2025-08-29 14:55:27.346751 | orchestrator | skipping: [testbed-node-3] 2025-08-29 14:55:27.346768 | orchestrator | skipping: [testbed-node-4] 2025-08-29 14:55:27.346784 | orchestrator | skipping: [testbed-node-5] 2025-08-29 14:55:27.346800 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:55:27.346816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:55:27.346832 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:55:27.346848 | orchestrator | 2025-08-29 14:55:27.346864 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-08-29 14:55:27.346879 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:02.286) 0:00:16.422 ********* 2025-08-29 14:55:27.346895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.346982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.347014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.347031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.347049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-08-29 14:55:27.347066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.347093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.347155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-08-29 14:55:27.347175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-08-29 14:55:27.347191 | orchestrator |
2025-08-29 14:55:27.347208 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:55:27.347224 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:02.996) 0:00:19.418 *********
2025-08-29 14:55:27.347241 | orchestrator |
2025-08-29 14:55:27.347257 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:55:27.347273 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:00.643) 0:00:20.061 *********
2025-08-29 14:55:27.347289 | orchestrator |
2025-08-29 14:55:27.347305 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:55:27.347321 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:00.243) 0:00:20.305 *********
2025-08-29 14:55:27.347337 | orchestrator |
2025-08-29 14:55:27.347353 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:55:27.347368 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:00.344) 0:00:20.650 *********
2025-08-29 14:55:27.347384 | orchestrator |
2025-08-29 14:55:27.347400 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:55:27.347415 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:00.296) 0:00:20.946 *********
2025-08-29 14:55:27.347430 | orchestrator |
2025-08-29 14:55:27.347446 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-08-29 14:55:27.347462 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:00.338) 0:00:21.285 *********
2025-08-29 14:55:27.347478 | orchestrator |
2025-08-29 14:55:27.347493 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-08-29 14:55:27.347509 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:00.467) 0:00:21.752 *********
2025-08-29 14:55:27.347525 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:55:27.347541 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:55:27.347557 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:55:27.347573 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:55:27.347589 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:55:27.347604 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:55:27.347620 | orchestrator |
2025-08-29 14:55:27.347636 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-08-29 14:55:27.347653 | orchestrator | Friday 29 August 2025 14:54:46 +0000 (0:00:12.787) 0:00:34.540 *********
2025-08-29 14:55:27.347668 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:55:27.347695 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:55:27.347710 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:55:27.347726 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:55:27.347741 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:55:27.347757 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:55:27.347772 | orchestrator |
2025-08-29 14:55:27.347788 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-08-29 14:55:27.347803 | orchestrator | Friday 29 August 2025 14:54:48 +0000 (0:00:01.426) 0:00:35.967 *********
2025-08-29 14:55:27.347820 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:55:27.347836 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:55:27.347852 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:55:27.347868 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:55:27.347884 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:55:27.347899 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:55:27.347916 | orchestrator |
2025-08-29 14:55:27.347932 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-08-29 14:55:27.347948 | orchestrator | Friday 29 August 2025 14:55:00 +0000 (0:00:12.425) 0:00:48.393 *********
2025-08-29 14:55:27.347964 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-08-29 14:55:27.347992 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-08-29 14:55:27.348010 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-08-29 14:55:27.348027 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-08-29 14:55:27.348043 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-08-29 14:55:27.348060 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-08-29 14:55:27.348085 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-08-29 14:55:27.348101 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-08-29 14:55:27.348117 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-08-29 14:55:27.348160 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-08-29 14:55:27.348177 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-08-29 14:55:27.348193 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-08-29 14:55:27.348208 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:55:27.348224 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:55:27.348240 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:55:27.348256 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:55:27.348273 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:55:27.348289 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-08-29 14:55:27.348306 | orchestrator |
2025-08-29 14:55:27.348322 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-08-29 14:55:27.348340 | orchestrator | Friday 29 August 2025 14:55:10 +0000 (0:00:09.573) 0:00:57.966 *********
2025-08-29 14:55:27.348367 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-08-29 14:55:27.348384 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:55:27.348400 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-08-29 14:55:27.348418 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:55:27.348435 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-08-29 14:55:27.348452 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:55:27.348469 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-08-29 14:55:27.348485 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-08-29 14:55:27.348502 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-08-29 14:55:27.348517 | orchestrator |
2025-08-29 14:55:27.348534 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-08-29 14:55:27.348550 | orchestrator | Friday 29 August 2025 14:55:13 +0000 (0:00:03.403) 0:01:01.370 *********
2025-08-29 14:55:27.348567 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:55:27.348584 | orchestrator | skipping: [testbed-node-3]
2025-08-29 14:55:27.348601 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:55:27.348617 | orchestrator | skipping: [testbed-node-4]
2025-08-29 14:55:27.348633 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:55:27.348649 | orchestrator | skipping: [testbed-node-5]
2025-08-29 14:55:27.348665 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:55:27.348682 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:55:27.348698 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-08-29 14:55:27.348715 | orchestrator |
2025-08-29 14:55:27.348733 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-08-29 14:55:27.348749 | orchestrator | Friday 29 August 2025 14:55:17 +0000 (0:00:03.774) 0:01:05.144 *********
2025-08-29 14:55:27.348765 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:55:27.348781 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:55:27.348797 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:55:27.348813 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:55:27.348829 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:55:27.348844 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:55:27.348861 | orchestrator |
2025-08-29 14:55:27.348877 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:55:27.348894 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 14:55:27.348911 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 14:55:27.348939 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 14:55:27.348957 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 14:55:27.348973 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 14:55:27.348997 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 14:55:27.349014 | orchestrator |
2025-08-29 14:55:27.349031 | orchestrator |
2025-08-29 14:55:27.349047 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:55:27.349061 | orchestrator | Friday 29 August 2025 14:55:25 +0000 (0:00:08.468) 0:01:13.613 *********
2025-08-29 14:55:27.349075 | orchestrator | ===============================================================================
2025-08-29 14:55:27.349101 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.89s
2025-08-29 14:55:27.349116 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.79s
2025-08-29 14:55:27.349203 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.57s
2025-08-29
14:55:27.349222 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.77s 2025-08-29 14:55:27.349238 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.70s 2025-08-29 14:55:27.349255 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.40s 2025-08-29 14:55:27.349272 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.00s 2025-08-29 14:55:27.349289 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.33s 2025-08-29 14:55:27.349305 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.29s 2025-08-29 14:55:27.349320 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.10s 2025-08-29 14:55:27.349336 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.63s 2025-08-29 14:55:27.349352 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.43s 2025-08-29 14:55:27.349367 | orchestrator | module-load : Load modules ---------------------------------------------- 1.35s 2025-08-29 14:55:27.349383 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.31s 2025-08-29 14:55:27.349401 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.20s 2025-08-29 14:55:27.349417 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s 2025-08-29 14:55:27.349433 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.83s 2025-08-29 14:55:27.349449 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2025-08-29 14:55:27.349466 | orchestrator | 2025-08-29 14:55:27 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 
14:55:27.349483 | orchestrator | 2025-08-29 14:55:27 | INFO  | Task 3b6b6575-4bde-4175-880d-3e340477ece2 is in state STARTED 2025-08-29 14:55:27.349496 | orchestrator | 2025-08-29 14:55:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:30.390809 | orchestrator | 2025-08-29 14:55:30 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:30.391333 | orchestrator | 2025-08-29 14:55:30 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:30.394083 | orchestrator | 2025-08-29 14:55:30 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:30.395194 | orchestrator | 2025-08-29 14:55:30 | INFO  | Task 3b6b6575-4bde-4175-880d-3e340477ece2 is in state STARTED 2025-08-29 14:55:30.395219 | orchestrator | 2025-08-29 14:55:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:33.425770 | orchestrator | 2025-08-29 14:55:33 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:33.425867 | orchestrator | 2025-08-29 14:55:33 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:33.426262 | orchestrator | 2025-08-29 14:55:33 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:33.426933 | orchestrator | 2025-08-29 14:55:33 | INFO  | Task 3b6b6575-4bde-4175-880d-3e340477ece2 is in state STARTED 2025-08-29 14:55:33.426967 | orchestrator | 2025-08-29 14:55:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:55:36.481905 | orchestrator | 2025-08-29 14:55:36 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:55:36.482739 | orchestrator | 2025-08-29 14:55:36 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state STARTED 2025-08-29 14:55:36.483447 | orchestrator | 2025-08-29 14:55:36 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:55:36.484019 | orchestrator 
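The "Set system-id, hostname and hw-offload" loop items above (`col`/`name`/`value`, optionally `state: absent`) correspond to plain `ovs-vsctl set`/`remove` operations on the `Open_vSwitch` table. A rough Python sketch of that mapping, assuming the role ultimately shells out to `ovs-vsctl` (the helper name is hypothetical, not kolla-ansible's):

```python
def ovs_vsctl_args(item, table="Open_vSwitch", record="."):
    """Translate one loop item from the play into an ovs-vsctl argument list.

    Hypothetical helper mirroring items seen in the log, e.g.
    {'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'} and
    {'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}.
    """
    if item.get("state") == "absent":
        # e.g. ovs-vsctl remove Open_vSwitch . other_config hw-offload
        return ["ovs-vsctl", "remove", table, record, item["col"], item["name"]]
    # e.g. ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-4
    return ["ovs-vsctl", "set", table, record,
            f'{item["col"]}:{item["name"]}={item["value"]}']
```

With `hw-offload` marked `absent`, the nodes report `ok` rather than `changed` because the key was already missing, which matches idempotent remove semantics.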
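The repeated status messages above come from a simple poll-until-terminal loop: the client re-reads each task's state every few seconds until it leaves STARTED. A minimal sketch of that pattern, with `fetch_state` as a stand-in for the real task-queue API:

```python
import time

TERMINAL = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed terminal states

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=600.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll fetch_state(task_id) until every task reaches a terminal state.

    Returns {task_id: final_state}; raises TimeoutError once the deadline
    passes with tasks still pending.
    """
    deadline = clock() + timeout
    states = {}
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = fetch_state(task_id)
            states[task_id] = state
            if state in TERMINAL:
                pending.discard(task_id)
        if pending:
            if clock() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            sleep(interval)  # "Wait 1 second(s) until the next check"
    return states
```

Injecting `clock` and `sleep` keeps the loop testable without real delays; the production loop in the log above checks roughly every three seconds.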
2025-08-29 14:56:58 | INFO  | Task 3b6b6575-4bde-4175-880d-3e340477ece2 is in state STARTED
2025-08-29 14:56:58 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:57:01 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:57:01 | INFO  | Task c7dd3c80-30cb-48a6-95ab-20087ca8ed2a is in state SUCCESS

PLAY [Set kolla_action_rabbitmq] ***********************************************

TASK [Inform the user about the following task] ********************************
Friday 29 August 2025  14:54:36 +0000 (0:00:00.296)       0:00:00.296 *********
ok: [localhost] => {
    "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
}

TASK [Check RabbitMQ service] **************************************************
Friday 29 August 2025  14:54:36 +0000 (0:00:00.049)       0:00:00.345 *********
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
...ignoring

TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
Friday 29 August 2025  14:54:39 +0000 (0:00:03.319)       0:00:03.665 *********
skipping: [localhost]

TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
Friday 29 August 2025  14:54:40 +0000 (0:00:00.095)       0:00:03.760 *********
ok: [localhost]

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Friday 29 August 2025  14:54:40 +0000 (0:00:00.192)       0:00:03.953 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Friday 29 August 2025  14:54:40 +0000 (0:00:00.333)       0:00:04.286 *********
ok: [testbed-node-0] => (item=enable_rabbitmq_True)
ok: [testbed-node-1] => (item=enable_rabbitmq_True)
ok: [testbed-node-2] => (item=enable_rabbitmq_True)

PLAY [Apply role rabbitmq] *****************************************************

TASK [rabbitmq : include_tasks] ************************************************
Friday 29 August 2025  14:54:41 +0000 (0:00:00.637)       0:00:04.924 *********
included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [rabbitmq : Get container facts] ******************************************
Friday 29 August 2025  14:54:41 +0000 (0:00:00.483)       0:00:05.408 *********
ok: [testbed-node-0]

TASK [rabbitmq : Get current RabbitMQ version] *********************************
Friday 29 August 2025  14:54:42 +0000 (0:00:01.116)       0:00:06.525 *********
skipping: [testbed-node-0]

TASK [rabbitmq : Get new RabbitMQ version] *************************************
Friday 29 August 2025  14:54:43 +0000 (0:00:00.446)       0:00:06.971 *********
skipping: [testbed-node-0]

TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
Friday 29 August 2025  14:54:43 +0000 (0:00:00.559)       0:00:07.531 *********
skipping: [testbed-node-0]

TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
Friday 29 August 2025  14:54:44 +0000 (0:00:00.463)       0:00:07.995 *********
skipping: [testbed-node-0]

TASK [rabbitmq : include_tasks] ************************************************
Friday 29 August 2025  14:54:44 +0000 (0:00:00.461)       0:00:08.456 *********
included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [rabbitmq : Get container facts] ******************************************
Friday 29 August 2025  14:54:46 +0000 (0:00:01.768)       0:00:10.225 *********
ok: [testbed-node-0]

TASK [rabbitmq : List RabbitMQ policies] ***************************************
Friday 29 August 2025  14:54:48 +0000 (0:00:01.574)       0:00:11.800 *********
skipping: [testbed-node-0]

TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
Friday 29 August 2025  14:54:49 +0000 (0:00:01.393)       0:00:13.193 *********
skipping: [testbed-node-0] 2025-08-29 14:57:01.753980 | orchestrator | 2025-08-29 14:57:01.754012 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-08-29 14:57:01.754107 | orchestrator | Friday 29 August 2025 14:54:50 +0000 (0:00:00.778) 0:00:13.972 ********* 2025-08-29 14:57:01.754131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754184 | orchestrator | 2025-08-29 14:57:01.754196 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-08-29 14:57:01.754207 | orchestrator | Friday 29 August 2025 14:54:52 +0000 (0:00:01.780) 0:00:15.752 ********* 2025-08-29 14:57:01.754229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754277 | orchestrator | 2025-08-29 14:57:01.754288 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-08-29 14:57:01.754299 | orchestrator | Friday 29 August 2025 14:54:55 +0000 (0:00:03.166) 0:00:18.919 ********* 2025-08-29 14:57:01.754310 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 14:57:01.754322 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 14:57:01.754333 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-08-29 14:57:01.754343 | orchestrator | 2025-08-29 14:57:01.754354 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-08-29 14:57:01.754365 | orchestrator | Friday 29 August 2025 14:54:56 +0000 (0:00:01.472) 0:00:20.392 ********* 2025-08-29 14:57:01.754375 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 14:57:01.754386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 14:57:01.754397 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-08-29 14:57:01.754408 | orchestrator | 2025-08-29 14:57:01.754418 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-08-29 14:57:01.754429 | orchestrator | Friday 29 August 2025 14:54:59 +0000 (0:00:02.663) 0:00:23.056 ********* 2025-08-29 14:57:01.754440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 14:57:01.754451 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 14:57:01.754462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-08-29 14:57:01.754472 | orchestrator | 2025-08-29 14:57:01.754483 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-08-29 14:57:01.754494 | orchestrator | Friday 29 August 2025 14:55:01 +0000 (0:00:02.092) 0:00:25.148 ********* 2025-08-29 14:57:01.754511 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 14:57:01.754523 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 14:57:01.754533 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-08-29 14:57:01.754544 | orchestrator | 2025-08-29 14:57:01.754555 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-08-29 14:57:01.754566 | orchestrator | Friday 29 August 2025 14:55:04 +0000 (0:00:03.410) 0:00:28.559 ********* 2025-08-29 14:57:01.754576 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 14:57:01.754587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 14:57:01.754598 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-08-29 14:57:01.754609 | orchestrator | 2025-08-29 14:57:01.754620 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-08-29 14:57:01.754641 | orchestrator | Friday 29 August 2025 14:55:06 +0000 (0:00:01.824) 0:00:30.384 ********* 2025-08-29 14:57:01.754652 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 14:57:01.754663 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 14:57:01.754673 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-08-29 14:57:01.754684 | orchestrator | 2025-08-29 14:57:01.754695 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-08-29 14:57:01.754706 | orchestrator | Friday 29 August 2025 14:55:08 +0000 (0:00:02.213) 0:00:32.597 ********* 2025-08-29 14:57:01.754717 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:01.754727 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:01.754743 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:01.754762 | orchestrator | 2025-08-29 14:57:01.754780 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-08-29 14:57:01.754798 | orchestrator | Friday 29 August 2025 14:55:09 
+0000 (0:00:00.612) 0:00:33.210 ********* 2025-08-29 14:57:01.754818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 14:57:01.754908 | orchestrator | 2025-08-29 14:57:01.754928 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-08-29 14:57:01.754956 | orchestrator | Friday 29 August 2025 14:55:12 +0000 (0:00:02.602) 0:00:35.813 ********* 2025-08-29 14:57:01.754976 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:01.754992 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:01.755002 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:01.755013 | orchestrator | 2025-08-29 14:57:01.755024 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-08-29 14:57:01.755035 | 
orchestrator | Friday 29 August 2025 14:55:13 +0000 (0:00:00.930) 0:00:36.743 ********* 2025-08-29 14:57:01.755046 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:01.755081 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:01.755093 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:01.755104 | orchestrator | 2025-08-29 14:57:01.755115 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-08-29 14:57:01.755126 | orchestrator | Friday 29 August 2025 14:55:20 +0000 (0:00:07.592) 0:00:44.335 ********* 2025-08-29 14:57:01.755137 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:01.755147 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:01.755158 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:01.755169 | orchestrator | 2025-08-29 14:57:01.755179 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 14:57:01.755190 | orchestrator | 2025-08-29 14:57:01.755201 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 14:57:01.755211 | orchestrator | Friday 29 August 2025 14:55:21 +0000 (0:00:00.729) 0:00:45.064 ********* 2025-08-29 14:57:01.755222 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:57:01.755233 | orchestrator | 2025-08-29 14:57:01.755243 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 14:57:01.755254 | orchestrator | Friday 29 August 2025 14:55:22 +0000 (0:00:00.638) 0:00:45.703 ********* 2025-08-29 14:57:01.755265 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:57:01.755275 | orchestrator | 2025-08-29 14:57:01.755286 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 14:57:01.755297 | orchestrator | Friday 29 August 2025 14:55:22 +0000 (0:00:00.326) 0:00:46.029 ********* 2025-08-29 14:57:01.755307 | orchestrator 
| changed: [testbed-node-0] 2025-08-29 14:57:01.755324 | orchestrator | 2025-08-29 14:57:01.755350 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 14:57:01.755372 | orchestrator | Friday 29 August 2025 14:55:23 +0000 (0:00:01.588) 0:00:47.618 ********* 2025-08-29 14:57:01.755391 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:57:01.755409 | orchestrator | 2025-08-29 14:57:01.755428 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 14:57:01.755448 | orchestrator | 2025-08-29 14:57:01.755466 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 14:57:01.755482 | orchestrator | Friday 29 August 2025 14:56:18 +0000 (0:00:54.861) 0:01:42.480 ********* 2025-08-29 14:57:01.755493 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:57:01.755504 | orchestrator | 2025-08-29 14:57:01.755515 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 14:57:01.755525 | orchestrator | Friday 29 August 2025 14:56:19 +0000 (0:00:00.620) 0:01:43.100 ********* 2025-08-29 14:57:01.755536 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:57:01.755547 | orchestrator | 2025-08-29 14:57:01.755558 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 14:57:01.755569 | orchestrator | Friday 29 August 2025 14:56:19 +0000 (0:00:00.443) 0:01:43.543 ********* 2025-08-29 14:57:01.755589 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:01.755600 | orchestrator | 2025-08-29 14:57:01.755610 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 14:57:01.755621 | orchestrator | Friday 29 August 2025 14:56:21 +0000 (0:00:01.903) 0:01:45.447 ********* 2025-08-29 14:57:01.755632 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:57:01.755643 
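The three "Restart rabbitmq services" plays above touch one broker at a time: restart the container on a node, wait for RabbitMQ to come back, then move on to the next node. A minimal sketch of that rolling pattern, assuming hypothetical `restart`/`is_healthy` callables rather than the role's real handlers:

```python
# Sketch of the serial restart pattern shown by the per-node plays above.
# `restart` and `is_healthy` are hypothetical stand-ins, not the role's API.
import time


def rolling_restart(nodes, restart, is_healthy, timeout=120, poll=1):
    """Restart nodes one at a time; block until each reports healthy."""
    restarted = []
    for node in nodes:
        restart(node)
        deadline = time.monotonic() + timeout
        while not is_healthy(node):
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{node} did not come back within {timeout}s")
            time.sleep(poll)
        restarted.append(node)  # only proceed once this node is back
    return restarted
```

Restarting serially keeps a quorum of brokers up at all times, which is why the log shows node-0, node-1, and node-2 each getting their own play.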
| orchestrator | 2025-08-29 14:57:01.755654 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-08-29 14:57:01.755665 | orchestrator | 2025-08-29 14:57:01.755676 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-08-29 14:57:01.755686 | orchestrator | Friday 29 August 2025 14:56:37 +0000 (0:00:15.431) 0:02:00.878 ********* 2025-08-29 14:57:01.755697 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:57:01.755708 | orchestrator | 2025-08-29 14:57:01.755719 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-08-29 14:57:01.755730 | orchestrator | Friday 29 August 2025 14:56:37 +0000 (0:00:00.672) 0:02:01.551 ********* 2025-08-29 14:57:01.755741 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:57:01.755751 | orchestrator | 2025-08-29 14:57:01.755762 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-08-29 14:57:01.755773 | orchestrator | Friday 29 August 2025 14:56:38 +0000 (0:00:00.240) 0:02:01.792 ********* 2025-08-29 14:57:01.755784 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:01.755795 | orchestrator | 2025-08-29 14:57:01.755806 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-08-29 14:57:01.755828 | orchestrator | Friday 29 August 2025 14:56:39 +0000 (0:00:01.606) 0:02:03.398 ********* 2025-08-29 14:57:01.755839 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:57:01.755850 | orchestrator | 2025-08-29 14:57:01.755861 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-08-29 14:57:01.755872 | orchestrator | 2025-08-29 14:57:01.755882 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-08-29 14:57:01.755893 | orchestrator | Friday 29 August 2025 14:56:55 +0000 (0:00:16.153) 
0:02:19.551 ********* 2025-08-29 14:57:01.755904 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:57:01.755915 | orchestrator | 2025-08-29 14:57:01.755934 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-08-29 14:57:01.755953 | orchestrator | Friday 29 August 2025 14:56:56 +0000 (0:00:00.922) 0:02:20.473 ********* 2025-08-29 14:57:01.755984 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 14:57:01.756004 | orchestrator | enable_outward_rabbitmq_True 2025-08-29 14:57:01.756022 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-08-29 14:57:01.756039 | orchestrator | outward_rabbitmq_restart 2025-08-29 14:57:01.756080 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:57:01.756101 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:57:01.756128 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:57:01.756148 | orchestrator | 2025-08-29 14:57:01.756167 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-08-29 14:57:01.756187 | orchestrator | skipping: no hosts matched 2025-08-29 14:57:01.756201 | orchestrator | 2025-08-29 14:57:01.756212 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-08-29 14:57:01.756223 | orchestrator | skipping: no hosts matched 2025-08-29 14:57:01.756234 | orchestrator | 2025-08-29 14:57:01.756244 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-08-29 14:57:01.756255 | orchestrator | skipping: no hosts matched 2025-08-29 14:57:01.756265 | orchestrator | 2025-08-29 14:57:01.756276 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 14:57:01.756288 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-08-29 
14:57:01.756300 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 14:57:01.756320 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:57:01.756331 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-08-29 14:57:01.756342 | orchestrator |
2025-08-29 14:57:01.756353 | orchestrator |
2025-08-29 14:57:01.756364 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:57:01.756375 | orchestrator | Friday 29 August 2025 14:56:59 +0000 (0:00:02.423) 0:02:22.897 *********
2025-08-29 14:57:01.756385 | orchestrator | ===============================================================================
2025-08-29 14:57:01.756396 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.45s
2025-08-29 14:57:01.756407 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.59s
2025-08-29 14:57:01.756417 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.10s
2025-08-29 14:57:01.756428 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.41s
2025-08-29 14:57:01.756439 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.32s
2025-08-29 14:57:01.756450 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.17s
2025-08-29 14:57:01.756460 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.66s
2025-08-29 14:57:01.756471 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.60s
2025-08-29 14:57:01.756482 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.42s
2025-08-29 14:57:01.756492 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.21s
2025-08-29 14:57:01.756503 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.09s
2025-08-29 14:57:01.756513 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.93s
2025-08-29 14:57:01.756524 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.82s
2025-08-29 14:57:01.756535 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.78s
2025-08-29 14:57:01.756545 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.77s
2025-08-29 14:57:01.756556 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.57s
2025-08-29 14:57:01.756567 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.47s
2025-08-29 14:57:01.756577 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.39s
2025-08-29 14:57:01.756588 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.12s
2025-08-29 14:57:01.756599 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.01s
2025-08-29 14:57:01.756610 | orchestrator | 2025-08-29 14:57:01 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
2025-08-29 14:57:01.756621 | orchestrator | 2025-08-29 14:57:01 | INFO  | Task 3b6b6575-4bde-4175-880d-3e340477ece2 is in state STARTED
2025-08-29 14:57:01.756641 | orchestrator | 2025-08-29 14:57:01 | INFO  | Wait 1 second(s) until the next check
2025-08-29 14:57:04.834605 | orchestrator | 2025-08-29 14:57:04 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 14:57:04.839948 | orchestrator | 2025-08-29 14:57:04 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED
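The per-task durations in the TASKS RECAP above follow a fixed shape: task name, a run of dashes, then seconds. A small sketch of pulling them into (name, seconds) pairs, assuming that line format:

```python
# Sketch: extract (task name, seconds) pairs from TASKS RECAP lines such as
# "rabbitmq : Waiting for rabbitmq to start ------- 86.45s".
import re

RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")


def parse_recap(lines):
    """Return [(task_name, seconds), ...] for lines matching the recap shape."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out
```

Summing the parsed values makes it easy to see that the 86.45s spent in "Waiting for rabbitmq to start" dominates this run's wall-clock time.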
INFO  | Task 3b6b6575-4bde-4175-880d-3e340477ece2 is in state SUCCESS 2025-08-29 14:58:02.783439 | orchestrator | 2025-08-29 14:58:02.783553 | orchestrator | 2025-08-29 14:58:02.783575 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 14:58:02.783588 | orchestrator | 2025-08-29 14:58:02.783599 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 14:58:02.783611 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:00.176) 0:00:00.176 ********* 2025-08-29 14:58:02.783623 | orchestrator | ok: [testbed-node-3] 2025-08-29 14:58:02.783635 | orchestrator | ok: [testbed-node-4] 2025-08-29 14:58:02.783646 | orchestrator | ok: [testbed-node-5] 2025-08-29 14:58:02.783656 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.783667 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.783678 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.783689 | orchestrator | 2025-08-29 14:58:02.783700 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 14:58:02.783815 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:00.702) 0:00:00.879 ********* 2025-08-29 14:58:02.783829 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-08-29 14:58:02.783841 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-08-29 14:58:02.783852 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-08-29 14:58:02.783862 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-08-29 14:58:02.783873 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-08-29 14:58:02.783884 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-08-29 14:58:02.783945 | orchestrator | 2025-08-29 14:58:02.783964 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-08-29 14:58:02.783984 | 
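The repeated state checks above come from a client-side wait loop: poll each task ID, log its state, sleep, and repeat until every task has left STARTED. A minimal sketch of that pattern, with a hypothetical `get_state` callback standing in for the real task-status lookup (the actual osism client queries Celery task states):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll task states until none is STARTED anymore; return the poll-cycle count.

    get_state(task_id) is a caller-supplied callback (hypothetical here) that
    returns a state string such as 'STARTED' or 'SUCCESS'.
    """
    pending = set(task_ids)
    cycles = 0
    while pending:
        cycles += 1
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":        # SUCCESS or FAILURE: stop watching it
                pending.remove(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return cycles
```

Note that although the log says "Wait 1 second(s)", consecutive check timestamps are roughly three seconds apart, so the per-task lookups themselves add latency on top of the sleep.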
orchestrator |
2025-08-29 14:58:02.784028 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-08-29 14:58:02.784049 | orchestrator | Friday 29 August 2025 14:55:32 +0000 (0:00:01.544) 0:00:02.423 *********
2025-08-29 14:58:02.784071 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:58:02.784092 | orchestrator |
2025-08-29 14:58:02.784112 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-08-29 14:58:02.784126 | orchestrator | Friday 29 August 2025 14:55:34 +0000 (0:00:01.649) 0:00:04.072 *********
2025-08-29 14:58:02.784141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784274 | orchestrator |
2025-08-29 14:58:02.784305 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-08-29 14:58:02.784318 | orchestrator | Friday 29 August 2025 14:55:35 +0000 (0:00:01.325) 0:00:05.398 *********
2025-08-29 14:58:02.784332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784370 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784417 | orchestrator |
2025-08-29 14:58:02.784428 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-08-29 14:58:02.784439 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:01.740) 0:00:07.138 *********
2025-08-29 14:58:02.784450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784494 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784546 | orchestrator |
2025-08-29 14:58:02.784557 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-08-29 14:58:02.784568 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:01.127) 0:00:08.266 *********
2025-08-29 14:58:02.784579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784652 | orchestrator |
2025-08-29 14:58:02.784674 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-08-29 14:58:02.784686 | orchestrator | Friday 29 August 2025 14:55:40 +0000 (0:00:01.701) 0:00:09.967 *********
2025-08-29 14:58:02.784698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 14:58:02.784764 | orchestrator |
2025-08-29 14:58:02.784775 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-08-29 14:58:02.784786 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:01.231) 0:00:11.199 *********
2025-08-29 14:58:02.784797 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:58:02.784808 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:58:02.784819 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:58:02.784829 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:58:02.784840 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:58:02.784851 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:58:02.784861 | orchestrator |
2025-08-29 14:58:02.784872 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-08-29 14:58:02.784883 | orchestrator | Friday 29 August 2025 14:55:44 +0000 (0:00:03.051) 0:00:14.250 *********
2025-08-29 14:58:02.784900 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-08-29 14:58:02.784911 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-08-29 14:58:02.784922 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-08-29 14:58:02.784933 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-08-29 14:58:02.784943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-08-29 14:58:02.784954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-08-29 14:58:02.784965 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:58:02.784976 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:58:02.785013 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:58:02.785026 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:58:02.785037 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:58:02.785048 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-08-29 14:58:02.785059 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:58:02.785070 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:58:02.785099 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:58:02.785111 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:58:02.785121 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:58:02.785132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-08-29 14:58:02.785143 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:58:02.785155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:58:02.785166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:58:02.785177 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:58:02.785188 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:58:02.785198 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-08-29 14:58:02.785209 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:58:02.785220 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:58:02.785231 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:58:02.785242 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:58:02.785252 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:58:02.785263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-08-29 14:58:02.785280 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:58:02.785303 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:58:02.785314 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:58:02.785325 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:58:02.785336 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:58:02.785347 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-08-29 14:58:02.785358 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 14:58:02.785369 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 14:58:02.785380 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 14:58:02.785391 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 14:58:02.785402 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-08-29 14:58:02.785413 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-08-29 14:58:02.785424 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-08-29 14:58:02.785435 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-08-29 14:58:02.785453 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-08-29 14:58:02.785464 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-08-29 14:58:02.785475 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-08-29 14:58:02.785486 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-08-29 14:58:02.785497 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 14:58:02.785508 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 14:58:02.785519 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 14:58:02.785530 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 14:58:02.785541 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-08-29 14:58:02.785552 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-08-29 14:58:02.785563 | orchestrator |
2025-08-29 14:58:02.785574 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:58:02.785585 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:18.570) 0:00:32.821 *********
2025-08-29 14:58:02.785596 | orchestrator |
2025-08-29 14:58:02.785607 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:58:02.785618 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:00.408) 0:00:33.229 *********
2025-08-29 14:58:02.785635 | orchestrator |
2025-08-29 14:58:02.785646 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:58:02.785657 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:00.065) 0:00:33.295 *********
2025-08-29 14:58:02.785668 | orchestrator |
2025-08-29 14:58:02.785679 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:58:02.785690 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:00.064) 0:00:33.359 *********
2025-08-29 14:58:02.785700 | orchestrator |
2025-08-29 14:58:02.785711 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:58:02.785722 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:00.063) 0:00:33.423 *********
2025-08-29 14:58:02.785733 | orchestrator |
2025-08-29 14:58:02.785744 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-08-29 14:58:02.785755 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:00.075) 0:00:33.498 *********
2025-08-29 14:58:02.785766 | orchestrator |
2025-08-29 14:58:02.785776 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-08-29 14:58:02.785818 | orchestrator | Friday 29 August 2025 14:56:04 +0000 (0:00:00.103) 0:00:33.601 *********
2025-08-29 14:58:02.785830 | orchestrator | ok: [testbed-node-4]
2025-08-29 14:58:02.785841 | orchestrator | ok: [testbed-node-5]
2025-08-29 14:58:02.785852 | orchestrator | ok: [testbed-node-3]
2025-08-29 14:58:02.785863 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:58:02.785874 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:58:02.785885 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:58:02.785896 | orchestrator |
2025-08-29 14:58:02.785907 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-08-29 14:58:02.785918 | orchestrator | Friday 29 August 2025 14:56:05 +0000 (0:00:01.967) 0:00:35.568 *********
2025-08-29 14:58:02.785929 | orchestrator | changed: [testbed-node-0]
2025-08-29 14:58:02.785939 | orchestrator | changed: [testbed-node-1]
2025-08-29 14:58:02.785951 | orchestrator | changed: [testbed-node-3]
2025-08-29 14:58:02.785962 | orchestrator | changed: [testbed-node-2]
2025-08-29 14:58:02.785972 | orchestrator | changed: [testbed-node-5]
2025-08-29 14:58:02.785983 | orchestrator | changed: [testbed-node-4]
2025-08-29 14:58:02.786123 | orchestrator |
2025-08-29 14:58:02.786143 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-08-29 14:58:02.786155 | orchestrator |
2025-08-29 14:58:02.786166 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 14:58:02.786177 | orchestrator | Friday 29 August 2025 14:56:40 +0000 (0:00:34.657) 0:01:10.226 *********
2025-08-29 14:58:02.786188 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:58:02.786199 | orchestrator |
2025-08-29 14:58:02.786210 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-08-29 14:58:02.786221 | orchestrator | Friday 29 August 2025 14:56:41 +0000 (0:00:00.754) 0:01:10.980 *********
2025-08-29 14:58:02.786232 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 14:58:02.786243 | orchestrator |
2025-08-29 14:58:02.786254 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-08-29 14:58:02.786265 | orchestrator | Friday 29 August 2025 14:56:41 +0000 (0:00:00.543) 0:01:11.524 *********
2025-08-29 14:58:02.786276 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:58:02.786287 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:58:02.786297 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:58:02.786308 | orchestrator |
2025-08-29 14:58:02.786319 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-08-29 14:58:02.786331 | orchestrator | Friday 29 August 2025 14:56:42 +0000 (0:00:01.036) 0:01:12.561 *********
2025-08-29 14:58:02.786342 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:58:02.786353 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:58:02.786384 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:58:02.786405 | orchestrator |
2025-08-29 14:58:02.786417 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-08-29 14:58:02.786428 | orchestrator | Friday 29 August 2025 14:56:43 +0000 (0:00:00.395) 0:01:12.956 *********
2025-08-29 14:58:02.786439 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:58:02.786449 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:58:02.786460 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:58:02.786471 | orchestrator |
2025-08-29 14:58:02.786482 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-08-29 14:58:02.786493 | orchestrator | Friday 29 August 2025 14:56:43 +0000 (0:00:00.355) 0:01:13.312 *********
2025-08-29 14:58:02.786503 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:58:02.786514 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:58:02.786525 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:58:02.786536 | orchestrator |
2025-08-29 14:58:02.786547 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-08-29 14:58:02.786558 | orchestrator | Friday 29 August 2025 14:56:44 +0000 (0:00:00.349) 0:01:13.661 *********
2025-08-29 14:58:02.786569 | orchestrator | ok: [testbed-node-0]
2025-08-29 14:58:02.786580 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:58:02.786590 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:58:02.786601 | orchestrator |
2025-08-29 14:58:02.786612 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-08-29 14:58:02.786623 | orchestrator | Friday 29 August 2025 14:56:44 +0000 (0:00:00.625) 0:01:14.287 *********
2025-08-29 14:58:02.786634 | orchestrator | skipping: [testbed-node-0]
2025-08-29 14:58:02.786645 | orchestrator | skipping: [testbed-node-1]
2025-08-29 14:58:02.786656 | orchestrator | skipping: [testbed-node-2]
2025-08-29 14:58:02.786667 | orchestrator |
2025-08-29 14:58:02.786678 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness]
***************************** 2025-08-29 14:58:02.786689 | orchestrator | Friday 29 August 2025 14:56:45 +0000 (0:00:00.547) 0:01:14.834 ********* 2025-08-29 14:58:02.786699 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.786710 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.786721 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.786732 | orchestrator | 2025-08-29 14:58:02.786743 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-08-29 14:58:02.786754 | orchestrator | Friday 29 August 2025 14:56:45 +0000 (0:00:00.295) 0:01:15.129 ********* 2025-08-29 14:58:02.786765 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.786775 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.786786 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.786797 | orchestrator | 2025-08-29 14:58:02.786808 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-08-29 14:58:02.786819 | orchestrator | Friday 29 August 2025 14:56:45 +0000 (0:00:00.301) 0:01:15.431 ********* 2025-08-29 14:58:02.786830 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.786841 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.786851 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.786862 | orchestrator | 2025-08-29 14:58:02.786873 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-08-29 14:58:02.786884 | orchestrator | Friday 29 August 2025 14:56:46 +0000 (0:00:00.541) 0:01:15.972 ********* 2025-08-29 14:58:02.786895 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.786905 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.786916 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.786927 | orchestrator | 2025-08-29 14:58:02.786938 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no 
leader] ***************** 2025-08-29 14:58:02.786949 | orchestrator | Friday 29 August 2025 14:56:46 +0000 (0:00:00.349) 0:01:16.322 ********* 2025-08-29 14:58:02.786959 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787016 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787029 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787040 | orchestrator | 2025-08-29 14:58:02.787058 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-08-29 14:58:02.787069 | orchestrator | Friday 29 August 2025 14:56:47 +0000 (0:00:00.323) 0:01:16.646 ********* 2025-08-29 14:58:02.787080 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787091 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787102 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787112 | orchestrator | 2025-08-29 14:58:02.787123 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-08-29 14:58:02.787134 | orchestrator | Friday 29 August 2025 14:56:47 +0000 (0:00:00.323) 0:01:16.969 ********* 2025-08-29 14:58:02.787145 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787156 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787167 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787177 | orchestrator | 2025-08-29 14:58:02.787188 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-08-29 14:58:02.787199 | orchestrator | Friday 29 August 2025 14:56:47 +0000 (0:00:00.311) 0:01:17.281 ********* 2025-08-29 14:58:02.787210 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787220 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787231 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787242 | orchestrator | 2025-08-29 14:58:02.787253 | orchestrator | TASK [ovn-db : Get OVN SB database information] 
******************************** 2025-08-29 14:58:02.787264 | orchestrator | Friday 29 August 2025 14:56:48 +0000 (0:00:00.530) 0:01:17.811 ********* 2025-08-29 14:58:02.787274 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787285 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787296 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787307 | orchestrator | 2025-08-29 14:58:02.787318 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-08-29 14:58:02.787329 | orchestrator | Friday 29 August 2025 14:56:48 +0000 (0:00:00.354) 0:01:18.166 ********* 2025-08-29 14:58:02.787340 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787350 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787361 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787372 | orchestrator | 2025-08-29 14:58:02.787383 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-08-29 14:58:02.787393 | orchestrator | Friday 29 August 2025 14:56:48 +0000 (0:00:00.304) 0:01:18.470 ********* 2025-08-29 14:58:02.787405 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787416 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787438 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787450 | orchestrator | 2025-08-29 14:58:02.787461 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-08-29 14:58:02.787472 | orchestrator | Friday 29 August 2025 14:56:49 +0000 (0:00:00.320) 0:01:18.791 ********* 2025-08-29 14:58:02.787483 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 14:58:02.787494 | orchestrator | 2025-08-29 14:58:02.787504 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-08-29 14:58:02.787515 | 
orchestrator | Friday 29 August 2025 14:56:50 +0000 (0:00:00.818) 0:01:19.609 ********* 2025-08-29 14:58:02.787526 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.787537 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.787548 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.787559 | orchestrator | 2025-08-29 14:58:02.787570 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-08-29 14:58:02.787581 | orchestrator | Friday 29 August 2025 14:56:50 +0000 (0:00:00.456) 0:01:20.065 ********* 2025-08-29 14:58:02.787592 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.787602 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.787613 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.787624 | orchestrator | 2025-08-29 14:58:02.787635 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-08-29 14:58:02.787646 | orchestrator | Friday 29 August 2025 14:56:50 +0000 (0:00:00.435) 0:01:20.501 ********* 2025-08-29 14:58:02.787663 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787674 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787685 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787696 | orchestrator | 2025-08-29 14:58:02.787707 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-08-29 14:58:02.787718 | orchestrator | Friday 29 August 2025 14:56:51 +0000 (0:00:00.538) 0:01:21.039 ********* 2025-08-29 14:58:02.787728 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787739 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787750 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787761 | orchestrator | 2025-08-29 14:58:02.787772 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-08-29 14:58:02.787783 | orchestrator | Friday 29 August 
2025 14:56:51 +0000 (0:00:00.342) 0:01:21.381 ********* 2025-08-29 14:58:02.787793 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787804 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787815 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787825 | orchestrator | 2025-08-29 14:58:02.787836 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-08-29 14:58:02.787847 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:00.358) 0:01:21.740 ********* 2025-08-29 14:58:02.787858 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787869 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787880 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787890 | orchestrator | 2025-08-29 14:58:02.787901 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-08-29 14:58:02.787912 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:00.366) 0:01:22.107 ********* 2025-08-29 14:58:02.787923 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.787934 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.787945 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.787956 | orchestrator | 2025-08-29 14:58:02.787966 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-08-29 14:58:02.787977 | orchestrator | Friday 29 August 2025 14:56:53 +0000 (0:00:00.543) 0:01:22.650 ********* 2025-08-29 14:58:02.787988 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.788015 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.788026 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.788037 | orchestrator | 2025-08-29 14:58:02.788048 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 14:58:02.788059 | orchestrator | Friday 29 
August 2025 14:56:53 +0000 (0:00:00.377) 0:01:23.027 ********* 2025-08-29 14:58:02.788071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788198 | orchestrator | 
2025-08-29 14:58:02.788209 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 14:58:02.788220 | orchestrator | Friday 29 August 2025 14:56:55 +0000 (0:00:01.691) 0:01:24.719 ********* 2025-08-29 14:58:02.788232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788362 | orchestrator | 2025-08-29 14:58:02.788373 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 14:58:02.788384 | orchestrator | Friday 29 August 2025 14:56:59 +0000 (0:00:04.865) 0:01:29.585 ********* 2025-08-29 14:58:02.788395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-08-29 14:58:02.788447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.788515 | orchestrator | 2025-08-29 14:58:02.788527 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:58:02.788538 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:01.955) 0:01:31.540 ********* 2025-08-29 14:58:02.788549 | orchestrator | 2025-08-29 14:58:02.788560 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:58:02.788571 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:00.278) 0:01:31.819 ********* 2025-08-29 14:58:02.788582 | orchestrator | 2025-08-29 14:58:02.788593 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:58:02.788606 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:00.070) 0:01:31.890 ********* 2025-08-29 14:58:02.788626 | orchestrator | 2025-08-29 14:58:02.788645 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 14:58:02.788673 | orchestrator | Friday 29 August 2025 14:57:02 +0000 (0:00:00.072) 0:01:31.962 ********* 2025-08-29 14:58:02.788697 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:02.788714 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:02.788732 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:02.788749 | orchestrator | 2025-08-29 14:58:02.788766 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 14:58:02.788784 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:02.317) 0:01:34.279 ********* 2025-08-29 14:58:02.788801 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:02.788821 | 
orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:02.788841 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:02.788860 | orchestrator | 2025-08-29 14:58:02.788880 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 14:58:02.788912 | orchestrator | Friday 29 August 2025 14:57:12 +0000 (0:00:07.667) 0:01:41.946 ********* 2025-08-29 14:58:02.788924 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:02.788935 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:02.788945 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:02.788956 | orchestrator | 2025-08-29 14:58:02.788967 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 14:58:02.788977 | orchestrator | Friday 29 August 2025 14:57:20 +0000 (0:00:07.669) 0:01:49.616 ********* 2025-08-29 14:58:02.788988 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.789030 | orchestrator | 2025-08-29 14:58:02.789050 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 14:58:02.789066 | orchestrator | Friday 29 August 2025 14:57:20 +0000 (0:00:00.154) 0:01:49.771 ********* 2025-08-29 14:58:02.789077 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.789088 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.789098 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.789109 | orchestrator | 2025-08-29 14:58:02.789120 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 14:58:02.789131 | orchestrator | Friday 29 August 2025 14:57:21 +0000 (0:00:01.272) 0:01:51.043 ********* 2025-08-29 14:58:02.789141 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.789152 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.789162 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:02.789173 | orchestrator | 2025-08-29 
14:58:02.789184 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 14:58:02.789194 | orchestrator | Friday 29 August 2025 14:57:22 +0000 (0:00:00.598) 0:01:51.642 ********* 2025-08-29 14:58:02.789205 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.789216 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.789226 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.789237 | orchestrator | 2025-08-29 14:58:02.789248 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 14:58:02.789259 | orchestrator | Friday 29 August 2025 14:57:22 +0000 (0:00:00.771) 0:01:52.413 ********* 2025-08-29 14:58:02.789269 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.789280 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.789291 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:02.789302 | orchestrator | 2025-08-29 14:58:02.789313 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 14:58:02.789324 | orchestrator | Friday 29 August 2025 14:57:23 +0000 (0:00:00.634) 0:01:53.048 ********* 2025-08-29 14:58:02.789334 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.789351 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.789372 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.789383 | orchestrator | 2025-08-29 14:58:02.789394 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 14:58:02.789405 | orchestrator | Friday 29 August 2025 14:57:24 +0000 (0:00:00.965) 0:01:54.013 ********* 2025-08-29 14:58:02.789415 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.789426 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.789437 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.789448 | orchestrator | 2025-08-29 14:58:02.789458 | orchestrator | TASK [ovn-db : Unset 
bootstrap args fact] ************************************** 2025-08-29 14:58:02.789469 | orchestrator | Friday 29 August 2025 14:57:25 +0000 (0:00:00.777) 0:01:54.791 ********* 2025-08-29 14:58:02.789480 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.789490 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.789501 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.789512 | orchestrator | 2025-08-29 14:58:02.789522 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-08-29 14:58:02.789533 | orchestrator | Friday 29 August 2025 14:57:25 +0000 (0:00:00.313) 0:01:55.105 ********* 2025-08-29 14:58:02.789545 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789564 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789575 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789587 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789599 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789610 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789632 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-08-29 14:58:02.789654 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789666 | orchestrator | 2025-08-29 14:58:02.789677 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-08-29 14:58:02.789688 | orchestrator | Friday 29 August 2025 14:57:26 +0000 (0:00:01.321) 0:01:56.426 ********* 2025-08-29 14:58:02.789700 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789717 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789729 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789740 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789762 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789807 | orchestrator | 2025-08-29 14:58:02.789825 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-08-29 14:58:02.789843 | orchestrator | Friday 29 August 2025 14:57:32 +0000 (0:00:05.876) 0:02:02.303 ********* 2025-08-29 14:58:02.789879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789912 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789933 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.789982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.790051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.790068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.790079 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 14:58:02.790090 | orchestrator | 2025-08-29 14:58:02.790101 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:58:02.790112 | orchestrator | Friday 29 August 2025 14:57:35 +0000 (0:00:02.808) 0:02:05.111 ********* 2025-08-29 14:58:02.790123 | orchestrator | 2025-08-29 14:58:02.790134 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:58:02.790144 | orchestrator | Friday 29 August 2025 14:57:35 +0000 (0:00:00.081) 0:02:05.193 ********* 2025-08-29 14:58:02.790155 | orchestrator | 2025-08-29 14:58:02.790166 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-08-29 14:58:02.790184 | orchestrator | Friday 29 August 2025 14:57:35 +0000 (0:00:00.076) 0:02:05.270 ********* 2025-08-29 14:58:02.790195 | orchestrator | 2025-08-29 14:58:02.790206 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-08-29 14:58:02.790217 | orchestrator | Friday 29 August 2025 14:57:35 +0000 (0:00:00.106) 0:02:05.376 ********* 2025-08-29 14:58:02.790228 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:02.790239 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:02.790250 | 
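The "Copying over config.json files for services" tasks above install Kolla's per-container config.json, which the kolla_start entrypoint reads at container start to copy configuration into place, fix ownership, and exec the service. A minimal sketch of the general shape for the ovn_northd container, assuming Kolla's documented keys (`command`, `config_files`, `permissions`); the command arguments and file names here are placeholders, not the exact file shipped by kolla-ansible:

```json
{
  "command": "/usr/sbin/ovn-northd",
  "config_files": [
    {
      "source": "/var/lib/kolla/config_files/example.conf",
      "dest": "/etc/ovn/example.conf",
      "owner": "root",
      "perm": "0600"
    }
  ],
  "permissions": [
    {
      "path": "/var/log/kolla/openvswitch",
      "owner": "root:root",
      "recurse": true
    }
  ]
}
```

The file is bind-mounted read-only into `/var/lib/kolla/config_files/` (visible in the `volumes` lists above), so a change to it is what flips these tasks from `ok` to `changed` and triggers the restart handlers.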
orchestrator | 2025-08-29 14:58:02.790276 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-08-29 14:58:02.790288 | orchestrator | Friday 29 August 2025 14:57:42 +0000 (0:00:06.256) 0:02:11.632 ********* 2025-08-29 14:58:02.790299 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:02.790310 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:02.790321 | orchestrator | 2025-08-29 14:58:02.790332 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-08-29 14:58:02.790342 | orchestrator | Friday 29 August 2025 14:57:48 +0000 (0:00:06.304) 0:02:17.937 ********* 2025-08-29 14:58:02.790353 | orchestrator | changed: [testbed-node-1] 2025-08-29 14:58:02.790364 | orchestrator | changed: [testbed-node-2] 2025-08-29 14:58:02.790375 | orchestrator | 2025-08-29 14:58:02.790386 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-08-29 14:58:02.790397 | orchestrator | Friday 29 August 2025 14:57:54 +0000 (0:00:06.413) 0:02:24.351 ********* 2025-08-29 14:58:02.790408 | orchestrator | skipping: [testbed-node-0] 2025-08-29 14:58:02.790418 | orchestrator | 2025-08-29 14:58:02.790429 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-08-29 14:58:02.790440 | orchestrator | Friday 29 August 2025 14:57:54 +0000 (0:00:00.132) 0:02:24.483 ********* 2025-08-29 14:58:02.790451 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.790462 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.790473 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.790484 | orchestrator | 2025-08-29 14:58:02.790495 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-08-29 14:58:02.790506 | orchestrator | Friday 29 August 2025 14:57:55 +0000 (0:00:00.816) 0:02:25.300 ********* 2025-08-29 14:58:02.790517 | orchestrator | 
skipping: [testbed-node-1] 2025-08-29 14:58:02.790528 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.790539 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:02.790550 | orchestrator | 2025-08-29 14:58:02.790562 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-08-29 14:58:02.790573 | orchestrator | Friday 29 August 2025 14:57:56 +0000 (0:00:00.590) 0:02:25.891 ********* 2025-08-29 14:58:02.790584 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.790595 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.790605 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.790616 | orchestrator | 2025-08-29 14:58:02.790628 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-08-29 14:58:02.790639 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:00.804) 0:02:26.696 ********* 2025-08-29 14:58:02.790650 | orchestrator | skipping: [testbed-node-1] 2025-08-29 14:58:02.790661 | orchestrator | skipping: [testbed-node-2] 2025-08-29 14:58:02.790671 | orchestrator | changed: [testbed-node-0] 2025-08-29 14:58:02.790683 | orchestrator | 2025-08-29 14:58:02.790693 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-08-29 14:58:02.790704 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:00.871) 0:02:27.567 ********* 2025-08-29 14:58:02.790715 | orchestrator | ok: [testbed-node-0] 2025-08-29 14:58:02.790726 | orchestrator | ok: [testbed-node-1] 2025-08-29 14:58:02.790737 | orchestrator | ok: [testbed-node-2] 2025-08-29 14:58:02.790748 | orchestrator | 2025-08-29 14:58:02.790759 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-08-29 14:58:02.790770 | orchestrator | Friday 29 August 2025 14:57:58 +0000 (0:00:00.719) 0:02:28.287 ********* 2025-08-29 14:58:02.790787 | orchestrator | ok: [testbed-node-0] 2025-08-29 
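The "Configure OVN NB/SB connection settings" tasks run only on the detected Raft leader (here testbed-node-0; node-1 and node-2 are skipped). A hedged sketch of roughly equivalent manual commands: 6641/6642 are OVN's conventional NB/SB TCP ports, the container names match the log above, but the control-socket path is an assumption and may differ in a Kolla image:

```
# Inspect Raft cluster state and identify the leader (socket path is an assumption)
docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound

# Expose the NB and SB databases on their conventional TCP ports
docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641:0.0.0.0
docker exec ovn_sb_db ovn-sbctl set-connection ptcp:6642:0.0.0.0
```

Writing the connection row on the leader is sufficient: the change replicates to the followers through the Raft cluster, which is why the task is skipped on the other two nodes.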
14:58:02.790798 | orchestrator | ok: [testbed-node-1]
2025-08-29 14:58:02.790809 | orchestrator | ok: [testbed-node-2]
2025-08-29 14:58:02.790819 | orchestrator |
2025-08-29 14:58:02.790830 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 14:58:02.790842 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0  failed=0  skipped=20  rescued=0  ignored=0
2025-08-29 14:58:02.790853 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-08-29 14:58:02.790865 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-08-29 14:58:02.790876 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-08-29 14:58:02.790887 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-08-29 14:58:02.790898 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-08-29 14:58:02.790908 | orchestrator |
2025-08-29 14:58:02.790931 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 14:58:02.790942 | orchestrator | Friday 29 August 2025 14:57:59 +0000 (0:00:00.901) 0:02:29.188 *********
2025-08-29 14:58:02.790952 | orchestrator | ===============================================================================
2025-08-29 14:58:02.790963 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.66s
2025-08-29 14:58:02.790974 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.57s
2025-08-29 14:58:02.790985 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.08s
2025-08-29 14:58:02.791013 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.97s
2025-08-29 14:58:02.791024 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.57s
2025-08-29 14:58:02.791035 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.88s
2025-08-29 14:58:02.791046 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.87s
2025-08-29 14:58:02.791069 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.05s
2025-08-29 14:58:02.791081 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.81s
2025-08-29 14:58:02.791092 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.97s
2025-08-29 14:58:02.791103 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.96s
2025-08-29 14:58:02.791114 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.74s
2025-08-29 14:58:02.791125 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.70s
2025-08-29 14:58:02.791136 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s
2025-08-29 14:58:02.791147 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.65s
2025-08-29 14:58:02.791158 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.54s
2025-08-29 14:58:02.791169 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.33s
2025-08-29 14:58:02.791180 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.32s
2025-08-29 14:58:02.791191 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.27s
2025-08-29 14:58:02.791202 | orchestrator | ovn-controller : Check ovn-controller containers
------------------------ 1.23s 2025-08-29 14:58:02.791213 | orchestrator | 2025-08-29 14:58:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:05.838209 | orchestrator | 2025-08-29 14:58:05 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:58:05.840099 | orchestrator | 2025-08-29 14:58:05 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:58:05.840366 | orchestrator | 2025-08-29 14:58:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:08.883313 | orchestrator | 2025-08-29 14:58:08 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:58:08.883394 | orchestrator | 2025-08-29 14:58:08 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:58:08.883401 | orchestrator | 2025-08-29 14:58:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:11.930157 | orchestrator | 2025-08-29 14:58:11 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:58:11.932423 | orchestrator | 2025-08-29 14:58:11 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:58:11.932457 | orchestrator | 2025-08-29 14:58:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:14.978657 | orchestrator | 2025-08-29 14:58:14 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:58:14.980123 | orchestrator | 2025-08-29 14:58:14 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:58:14.980524 | orchestrator | 2025-08-29 14:58:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 14:58:18.016648 | orchestrator | 2025-08-29 14:58:18 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 14:58:18.018425 | orchestrator | 2025-08-29 14:58:18 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 14:58:18.018463 | orchestrator | 
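The repeating INFO lines that follow come from the OSISM client polling the state of the two background tasks roughly every few seconds until they leave STARTED. A minimal self-contained sketch of that wait loop; the function name, state strings beyond STARTED, and the injectable clock/sleep hooks are assumptions for illustration, not the actual osism implementation:

```python
import time


def wait_for_tasks(get_state, task_ids, poll_interval=1.0, timeout=60.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll until every task leaves the PENDING/STARTED states or timeout expires.

    get_state: callable mapping a task id to a state string (e.g. "STARTED").
    Returns a dict of final states per task id; raises TimeoutError on timeout.
    """
    deadline = clock() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in list(pending):
            state = get_state(task_id)
            states[task_id] = state
            if state in ("PENDING", "STARTED"):
                print(f"Task {task_id} is in state {state}")
            else:
                # Task reached a terminal state (e.g. SUCCESS/FAILURE); stop polling it.
                pending.discard(task_id)
        if not pending:
            break
        if clock() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        print(f"Wait {int(poll_interval)} second(s) until the next check")
        sleep(poll_interval)
    return states
```

With a real backend, `get_state` would query the task queue (Celery-style `AsyncResult.state`); the injectable `clock`/`sleep` make the loop testable without real delays.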
2025-08-29 14:58:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:00:10.720651 | orchestrator | 2025-08-29
15:00:10 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:10.722807 | orchestrator | 2025-08-29 15:00:10 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:10.723238 | orchestrator | 2025-08-29 15:00:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:13.758970 | orchestrator | 2025-08-29 15:00:13 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:13.759069 | orchestrator | 2025-08-29 15:00:13 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:13.759084 | orchestrator | 2025-08-29 15:00:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:16.805491 | orchestrator | 2025-08-29 15:00:16 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:16.807603 | orchestrator | 2025-08-29 15:00:16 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:16.807690 | orchestrator | 2025-08-29 15:00:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:19.856678 | orchestrator | 2025-08-29 15:00:19 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:19.857132 | orchestrator | 2025-08-29 15:00:19 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:19.857279 | orchestrator | 2025-08-29 15:00:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:22.910617 | orchestrator | 2025-08-29 15:00:22 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:22.912495 | orchestrator | 2025-08-29 15:00:22 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:22.912550 | orchestrator | 2025-08-29 15:00:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:25.953716 | orchestrator | 2025-08-29 15:00:25 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state 
STARTED 2025-08-29 15:00:25.955483 | orchestrator | 2025-08-29 15:00:25 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:25.955534 | orchestrator | 2025-08-29 15:00:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:28.992965 | orchestrator | 2025-08-29 15:00:28 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:28.996190 | orchestrator | 2025-08-29 15:00:28 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:28.996266 | orchestrator | 2025-08-29 15:00:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:32.052221 | orchestrator | 2025-08-29 15:00:32 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:32.054006 | orchestrator | 2025-08-29 15:00:32 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:32.054122 | orchestrator | 2025-08-29 15:00:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:35.097137 | orchestrator | 2025-08-29 15:00:35 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:35.099186 | orchestrator | 2025-08-29 15:00:35 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:35.099350 | orchestrator | 2025-08-29 15:00:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:38.145644 | orchestrator | 2025-08-29 15:00:38 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:38.147971 | orchestrator | 2025-08-29 15:00:38 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:38.148018 | orchestrator | 2025-08-29 15:00:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:41.200470 | orchestrator | 2025-08-29 15:00:41 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:41.202468 | orchestrator | 2025-08-29 15:00:41 | INFO  
| Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:41.202518 | orchestrator | 2025-08-29 15:00:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:44.247348 | orchestrator | 2025-08-29 15:00:44 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:44.248409 | orchestrator | 2025-08-29 15:00:44 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:44.248487 | orchestrator | 2025-08-29 15:00:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:47.285273 | orchestrator | 2025-08-29 15:00:47 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:47.285557 | orchestrator | 2025-08-29 15:00:47 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:47.285604 | orchestrator | 2025-08-29 15:00:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:50.324597 | orchestrator | 2025-08-29 15:00:50 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:50.326733 | orchestrator | 2025-08-29 15:00:50 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state STARTED 2025-08-29 15:00:50.327746 | orchestrator | 2025-08-29 15:00:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:53.374313 | orchestrator | 2025-08-29 15:00:53 | INFO  | Task f9532f67-2c48-40a8-983e-5cb9fdd5a371 is in state STARTED 2025-08-29 15:00:53.375278 | orchestrator | 2025-08-29 15:00:53 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED 2025-08-29 15:00:53.385722 | orchestrator | 2025-08-29 15:00:53 | INFO  | Task 67e5682e-3c85-42eb-9260-a60545352383 is in state SUCCESS 2025-08-29 15:00:53.387302 | orchestrator | 2025-08-29 15:00:53.387427 | orchestrator | 2025-08-29 15:00:53.387455 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:00:53.387522 | orchestrator | 
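The wait loop logged above is the OSISM client polling two task IDs until they leave the STARTED state. A minimal sketch of that check-and-sleep cycle, assuming a hypothetical `get_task_state` lookup (the canned state sequence below is illustrative only, not the actual osism/Celery API):

```python
import itertools
import time

# Hypothetical stand-in for the real task-state lookup: yields
# STARTED a few times, then SUCCESS forever (illustration only).
_states = itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS"))

def get_task_state(task_id):
    return next(_states)

def wait_for_task(task_id, interval=1.0):
    """Poll a task, logging each check, until it leaves STARTED."""
    while True:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state != "STARTED":
            return state
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)

print(wait_for_task("67e5682e-3c85-42eb-9260-a60545352383", interval=0.01))
```

In the log, the effective gap between checks is about 3 seconds even though the message says "Wait 1 second(s)", which suggests the per-check overhead (queue round-trips) dominates the nominal sleep interval.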
2025-08-29 15:00:53.387647 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:00:53.387662 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.340) 0:00:00.340 *********
2025-08-29 15:00:53.387986 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.388070 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.388085 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.388158 | orchestrator |
2025-08-29 15:00:53.388204 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:00:53.388287 | orchestrator | Friday 29 August 2025 14:54:12 +0000 (0:00:00.317) 0:00:00.658 *********
2025-08-29 15:00:53.388362 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-08-29 15:00:53.388386 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-08-29 15:00:53.388405 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-08-29 15:00:53.388425 | orchestrator |
2025-08-29 15:00:53.388443 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-08-29 15:00:53.388461 | orchestrator |
2025-08-29 15:00:53.388478 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-08-29 15:00:53.388496 | orchestrator | Friday 29 August 2025 14:54:13 +0000 (0:00:00.921) 0:00:01.579 *********
2025-08-29 15:00:53.388514 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:53.388533 | orchestrator |
2025-08-29 15:00:53.388553 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-08-29 15:00:53.388572 | orchestrator | Friday 29 August 2025 14:54:14 +0000 (0:00:00.528) 0:00:02.107 *********
2025-08-29 15:00:53.388590 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.388607 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.388624 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.388641 | orchestrator |
2025-08-29 15:00:53.388660 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 15:00:53.388678 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:00.832) 0:00:02.940 *********
2025-08-29 15:00:53.388697 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:53.388716 | orchestrator |
2025-08-29 15:00:53.388735 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-08-29 15:00:53.388755 | orchestrator | Friday 29 August 2025 14:54:15 +0000 (0:00:00.794) 0:00:03.512 *********
2025-08-29 15:00:53.388775 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.388824 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.388845 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.388864 | orchestrator |
2025-08-29 15:00:53.388884 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-08-29 15:00:53.388904 | orchestrator | Friday 29 August 2025 14:54:16 +0000 (0:00:00.794) 0:00:04.306 *********
2025-08-29 15:00:53.388923 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 15:00:53.388942 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 15:00:53.388959 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-08-29 15:00:53.388977 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 15:00:53.388995 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 15:00:53.389014 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 15:00:53.389036 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-08-29 15:00:53.389054 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 15:00:53.389265 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 15:00:53.389290 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-08-29 15:00:53.389308 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 15:00:53.389325 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-08-29 15:00:53.389343 | orchestrator |
2025-08-29 15:00:53.389360 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 15:00:53.389377 | orchestrator | Friday 29 August 2025 14:54:19 +0000 (0:00:03.362) 0:00:07.669 *********
2025-08-29 15:00:53.389415 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 15:00:53.389435 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 15:00:53.389453 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 15:00:53.389472 | orchestrator |
2025-08-29 15:00:53.389491 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 15:00:53.389509 | orchestrator | Friday 29 August 2025 14:54:21 +0000 (0:00:01.191) 0:00:08.860 *********
2025-08-29 15:00:53.389527 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-08-29 15:00:53.389545 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-08-29 15:00:53.389564 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-08-29 15:00:53.389583 | orchestrator |
2025-08-29 15:00:53.389602 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 15:00:53.389621 | orchestrator | Friday 29 August 2025 14:54:22 +0000 (0:00:01.702) 0:00:10.562 *********
2025-08-29 15:00:53.389642 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-08-29 15:00:53.389709 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.389975 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-08-29 15:00:53.390008 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.390160 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-08-29 15:00:53.390181 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.390201 | orchestrator |
2025-08-29 15:00:53.390218 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-08-29 15:00:53.390237 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:00:00.634) 0:00:11.197 *********
2025-08-29 15:00:53.390261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.390288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.390308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.390327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.390672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.390722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.390743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.390771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.390791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.390868 | orchestrator |
2025-08-29 15:00:53.390888 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-08-29 15:00:53.390959 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:03.766) 0:00:14.963 *********
2025-08-29 15:00:53.390979 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.390997 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.391015 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.391034 | orchestrator |
2025-08-29 15:00:53.391172 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-08-29 15:00:53.391195 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:01.675) 0:00:16.638 *********
2025-08-29 15:00:53.391214 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-08-29 15:00:53.391232 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-08-29 15:00:53.391250 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-08-29 15:00:53.391285 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-08-29 15:00:53.391304 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-08-29 15:00:53.391345 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-08-29 15:00:53.391380 | orchestrator |
2025-08-29 15:00:53.391399 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-08-29 15:00:53.391418 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:02.488) 0:00:19.127 *********
2025-08-29 15:00:53.391436 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.391454 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.391472 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.391490 | orchestrator |
2025-08-29 15:00:53.391508 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-08-29 15:00:53.391527 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:01.988) 0:00:21.115 *********
2025-08-29 15:00:53.391644 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.391682 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.391769 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.391792 | orchestrator |
2025-08-29 15:00:53.391841 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-08-29 15:00:53.391862 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:03.910) 0:00:25.026 *********
2025-08-29 15:00:53.391883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.391924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.391959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.391982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.392054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 15:00:53.392192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.392218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.392239 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.392261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 15:00:53.392281 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.392434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.392475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.392502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.392544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 15:00:53.392569 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.392592 | orchestrator |
2025-08-29 15:00:53.392615 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-08-29 15:00:53.392639 | orchestrator | Friday 29 August 2025 14:54:38 +0000 (0:00:00.935) 0:00:25.963 *********
2025-08-29 15:00:53.393000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.393031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.393072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.393105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.393130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.393169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-08-29 15:00:53.393192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.393210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 15:00:53.393314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 15:00:53.393350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.393526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 15:00:53.393566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65', '__omit_place_holder__b833b631046688aa70cbf3c5458afda8e865bc65'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-08-29 15:00:53.393585 | orchestrator | 2025-08-29 15:00:53.393603 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-08-29 15:00:53.393621 | orchestrator | Friday 29 August 2025 14:54:41 +0000 (0:00:03.429) 0:00:29.392 ********* 2025-08-29 15:00:53.393640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.393658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.393676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.393708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.393735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.393933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.393958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 15:00:53.393977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 15:00:53.393994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 15:00:53.394167 | orchestrator | 2025-08-29 15:00:53.394186 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-08-29 15:00:53.394201 | orchestrator | Friday 29 August 2025 14:54:45 +0000 (0:00:03.991) 0:00:33.384 ********* 2025-08-29 15:00:53.394215 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 15:00:53.394230 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 15:00:53.394244 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-08-29 15:00:53.394259 | orchestrator | 2025-08-29 15:00:53.394273 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-08-29 15:00:53.394287 | orchestrator | Friday 29 August 2025 14:54:49 +0000 (0:00:03.562) 0:00:36.946 ********* 2025-08-29 15:00:53.394302 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 15:00:53.394315 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 15:00:53.394329 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-08-29 15:00:53.394343 | orchestrator | 2025-08-29 15:00:53.394379 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-08-29 15:00:53.394395 | orchestrator | Friday 29 August 2025 14:54:56 +0000 (0:00:06.988) 0:00:43.935 ********* 2025-08-29 15:00:53.394408 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.394421 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.394433 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.394446 | orchestrator | 2025-08-29 15:00:53.394514 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-08-29 15:00:53.394529 | orchestrator | Friday 29 August 2025 14:54:56 +0000 (0:00:00.836) 0:00:44.772 ********* 2025-08-29 15:00:53.394551 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 15:00:53.394568 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 15:00:53.394581 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-08-29 15:00:53.394596 | orchestrator | 2025-08-29 15:00:53.394610 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-08-29 15:00:53.394623 | orchestrator | Friday 29 August 2025 14:55:01 +0000 (0:00:04.101) 0:00:48.873 ********* 2025-08-29 15:00:53.394739 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 15:00:53.394754 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 15:00:53.394768 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-08-29 15:00:53.394782 | orchestrator | 2025-08-29 15:00:53.394816 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-08-29 15:00:53.394831 | orchestrator | Friday 29 August 2025 14:55:04 +0000 (0:00:03.927) 0:00:52.800 ********* 2025-08-29 15:00:53.394845 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-08-29 15:00:53.394857 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-08-29 15:00:53.394871 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-08-29 15:00:53.394884 | orchestrator | 2025-08-29 15:00:53.394897 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-08-29 15:00:53.394909 | orchestrator | Friday 29 August 2025 14:55:06 +0000 (0:00:02.007) 0:00:54.807 ********* 2025-08-29 15:00:53.394922 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-08-29 15:00:53.394935 | orchestrator | changed: 
[testbed-node-1] => (item=haproxy-internal.pem) 2025-08-29 15:00:53.394948 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-08-29 15:00:53.394961 | orchestrator | 2025-08-29 15:00:53.394992 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-08-29 15:00:53.395006 | orchestrator | Friday 29 August 2025 14:55:09 +0000 (0:00:02.348) 0:00:57.156 ********* 2025-08-29 15:00:53.395019 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.395032 | orchestrator | 2025-08-29 15:00:53.395044 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-08-29 15:00:53.395076 | orchestrator | Friday 29 August 2025 14:55:10 +0000 (0:00:01.023) 0:00:58.180 ********* 2025-08-29 15:00:53.395091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.395122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.395150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.395172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.395191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.395199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.395208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 15:00:53.395216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2025-08-29 15:00:53.395231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 15:00:53.395239 | orchestrator | 2025-08-29 15:00:53.395247 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-08-29 15:00:53.395255 | orchestrator | Friday 29 August 2025 14:55:14 +0000 (0:00:04.086) 0:01:02.266 ********* 2025-08-29 15:00:53.395273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 15:00:53.395286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 15:00:53.395296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 15:00:53.395310 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.395323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-08-29 15:00:53.395403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 15:00:53.395425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 15:00:53.395440 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.395454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-08-29 15:00:53.395498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-08-29 15:00:53.395514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-08-29 15:00:53.395529 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.395543 | orchestrator | 2025-08-29 15:00:53.395557 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-08-29 15:00:53.395571 | orchestrator | Friday 29 August 2025 14:55:15 +0000 (0:00:00.868) 0:01:03.134 ********* 2025-08-29 15:00:53.395585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-08-29 15:00:53.395600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.395624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.395638 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.395653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.395689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.395711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.395726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.395740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.395763 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.395866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.395883 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.395896 | orchestrator |
2025-08-29 15:00:53.395911 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-08-29 15:00:53.395925 | orchestrator | Friday 29 August 2025  14:55:16 +0000 (0:00:00.869)       0:01:04.004 *********
2025-08-29 15:00:53.395940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.395967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.395982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.396021 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.396036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.396053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.396076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.396090 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.396104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.396222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.396247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.396262 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.396275 | orchestrator |
2025-08-29 15:00:53.396288 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-08-29 15:00:53.396301 | orchestrator | Friday 29 August 2025  14:55:17 +0000 (0:00:00.933)       0:01:04.937 *********
2025-08-29 15:00:53.396322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.396338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.396362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.396376 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.396391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.396406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.396420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.396435 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.396459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.396480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.396552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.396580 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.396594 | orchestrator |
2025-08-29 15:00:53.396607 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-08-29 15:00:53.396621 | orchestrator | Friday 29 August 2025  14:55:18 +0000 (0:00:01.094)       0:01:06.031 *********
2025-08-29 15:00:53.396634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.396647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.396674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.396735 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.396761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.396909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.396943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.396957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.396971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.396986 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.396999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397013 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.397026 | orchestrator |
2025-08-29 15:00:53.397040 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-08-29 15:00:53.397053 | orchestrator | Friday 29 August 2025  14:55:20 +0000 (0:00:01.863)       0:01:07.894 *********
2025-08-29 15:00:53.397067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397178 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.397189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397254 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.397284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397305 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.397318 | orchestrator |
2025-08-29 15:00:53.397329 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-08-29 15:00:53.397342 | orchestrator | Friday 29 August 2025  14:55:21 +0000 (0:00:01.142)       0:01:09.037 *********
2025-08-29 15:00:53.397352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397389 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.397473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397530 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.397543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397578 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.397590 | orchestrator |
2025-08-29 15:00:53.397601 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-08-29 15:00:53.397641 | orchestrator | Friday 29 August 2025  14:55:21 +0000 (0:00:00.751)       0:01:09.788 *********
2025-08-29 15:00:53.397698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397769 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.397829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.397878 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.397890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-08-29 15:00:53.397901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-08-29 15:00:53.397913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-08-29 15:00:53.398098 |
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.398116 | orchestrator | 2025-08-29 15:00:53.398129 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-08-29 15:00:53.398140 | orchestrator | Friday 29 August 2025 14:55:23 +0000 (0:00:01.098) 0:01:10.886 ********* 2025-08-29 15:00:53.398152 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 15:00:53.398164 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 15:00:53.398186 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-08-29 15:00:53.398218 | orchestrator | 2025-08-29 15:00:53.398292 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-08-29 15:00:53.398305 | orchestrator | Friday 29 August 2025 14:55:25 +0000 (0:00:02.000) 0:01:12.886 ********* 2025-08-29 15:00:53.398317 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 15:00:53.398328 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 15:00:53.398341 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-08-29 15:00:53.398353 | orchestrator | 2025-08-29 15:00:53.398373 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-08-29 15:00:53.398386 | orchestrator | Friday 29 August 2025 14:55:26 +0000 (0:00:01.596) 0:01:14.483 ********* 2025-08-29 15:00:53.398397 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:00:53.398408 | orchestrator | skipping: [testbed-node-1] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:00:53.398420 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:00:53.398433 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.398445 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:00:53.398457 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:00:53.398469 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.398481 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:00:53.398494 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.398506 | orchestrator | 2025-08-29 15:00:53.398517 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-08-29 15:00:53.398529 | orchestrator | Friday 29 August 2025 14:55:27 +0000 (0:00:01.062) 0:01:15.546 ********* 2025-08-29 15:00:53.398542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.398649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.398677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-08-29 15:00:53.398763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.398787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.398825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-08-29 15:00:53.398838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 15:00:53.398851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 15:00:53.398864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-08-29 15:00:53.398885 | orchestrator | 2025-08-29 15:00:53.398897 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-08-29 15:00:53.398909 | orchestrator | Friday 29 August 2025 14:55:30 +0000 (0:00:02.448) 0:01:17.994 ********* 2025-08-29 15:00:53.398921 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.398933 | orchestrator | 2025-08-29 15:00:53.398945 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-08-29 15:00:53.398956 | orchestrator | Friday 29 August 2025 14:55:31 +0000 (0:00:00.823) 0:01:18.817 ********* 2025-08-29 15:00:53.398969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 15:00:53.398997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.399010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 15:00:53.399085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.399098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-08-29 15:00:53.399150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.399163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399194 | orchestrator | 2025-08-29 15:00:53.399206 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-08-29 15:00:53.399218 | orchestrator | Friday 29 August 2025 14:55:35 +0000 (0:00:04.518) 0:01:23.336 ********* 2025-08-29 15:00:53.399231 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 15:00:53.399250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.399267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399293 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.399332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 15:00:53.399352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.399365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399389 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.399413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-08-29 15:00:53.399426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.399445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399468 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.399480 | orchestrator | 2025-08-29 15:00:53.399492 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-08-29 15:00:53.399505 | orchestrator | Friday 29 August 2025 14:55:36 +0000 (0:00:00.977) 0:01:24.314 ********* 2025-08-29 15:00:53.399518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 15:00:53.399531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 15:00:53.399545 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.399557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 15:00:53.399570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 15:00:53.399581 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.399594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-08-29 15:00:53.399606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-08-29 15:00:53.399618 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.399630 | orchestrator | 2025-08-29 15:00:53.399649 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-08-29 15:00:53.399660 | orchestrator | Friday 29 August 2025 14:55:37 +0000 (0:00:01.018) 0:01:25.333 ********* 2025-08-29 15:00:53.399672 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.399685 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.399697 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.399709 | orchestrator | 2025-08-29 15:00:53.399722 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-08-29 15:00:53.399734 | orchestrator | Friday 29 August 2025 14:55:38 +0000 (0:00:01.343) 0:01:26.677 ********* 2025-08-29 15:00:53.399746 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.399758 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.399770 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.399782 | orchestrator | 2025-08-29 15:00:53.399854 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-08-29 15:00:53.399884 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:02.330) 0:01:29.007 ********* 2025-08-29 15:00:53.399897 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.399909 | orchestrator | 2025-08-29 15:00:53.399922 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-08-29 15:00:53.399934 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:00.921) 0:01:29.929 ********* 2025-08-29 15:00:53.399946 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.399959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.399972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.399986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.400061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400085 | orchestrator | 2025-08-29 15:00:53.400097 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-08-29 15:00:53.400109 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:04.391) 0:01:34.320 ********* 2025-08-29 15:00:53.400128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  
2025-08-29 15:00:53.400155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400180 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.400193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.400205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400229 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.400253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.400273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.400285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2025-08-29 15:00:53.400296 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.400307 | orchestrator | 2025-08-29 15:00:53.400319 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-08-29 15:00:53.400331 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:00.654) 0:01:34.975 ********* 2025-08-29 15:00:53.400343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 15:00:53.400355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 15:00:53.400367 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.400380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 15:00:53.400391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 15:00:53.400402 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.400413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 15:00:53.400425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2025-08-29 15:00:53.400444 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.400456 | orchestrator | 2025-08-29 15:00:53.400468 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-08-29 15:00:53.400480 | orchestrator | Friday 29 August 2025 14:55:48 +0000 (0:00:01.028) 0:01:36.003 ********* 2025-08-29 15:00:53.400491 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.400503 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.400514 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.400526 | orchestrator | 2025-08-29 15:00:53.400537 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-08-29 15:00:53.400548 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:01.402) 0:01:37.406 ********* 2025-08-29 15:00:53.400560 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.400571 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.400583 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.400594 | orchestrator | 2025-08-29 15:00:53.400611 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-08-29 15:00:53.400624 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:02.047) 0:01:39.454 ********* 2025-08-29 15:00:53.400635 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.400646 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.400658 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.400670 | orchestrator | 2025-08-29 15:00:53.400681 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-08-29 15:00:53.400693 | orchestrator | Friday 29 August 2025 14:55:52 +0000 (0:00:00.414) 0:01:39.869 ********* 2025-08-29 15:00:53.400705 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-08-29 15:00:53.400716 | orchestrator | 2025-08-29 15:00:53.400733 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-08-29 15:00:53.400745 | orchestrator | Friday 29 August 2025 14:55:52 +0000 (0:00:00.668) 0:01:40.537 ********* 2025-08-29 15:00:53.400758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 15:00:53.400770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 15:00:53.400782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-08-29 15:00:53.400818 | orchestrator | 2025-08-29 15:00:53.400831 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-08-29 15:00:53.400842 | orchestrator | Friday 29 August 2025 14:55:55 +0000 (0:00:02.881) 0:01:43.418 ********* 2025-08-29 15:00:53.400860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 
'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 15:00:53.400872 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.400889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 15:00:53.400902 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.400913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-08-29 15:00:53.400924 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.400936 | orchestrator | 2025-08-29 15:00:53.400947 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-08-29 15:00:53.400959 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:01.568) 0:01:44.987 ********* 2025-08-29 15:00:53.400971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 15:00:53.400992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 15:00:53.401005 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.401016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 15:00:53.401027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 15:00:53.401038 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.401056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 15:00:53.401073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-08-29 15:00:53.401086 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.401098 | orchestrator | 2025-08-29 15:00:53.401110 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-08-29 15:00:53.401122 | orchestrator | Friday 29 August 2025 14:55:58 +0000 (0:00:01.727) 0:01:46.715 ********* 2025-08-29 15:00:53.401134 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.401146 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.401158 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.401170 | orchestrator | 2025-08-29 15:00:53.401181 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-08-29 15:00:53.401193 | orchestrator | Friday 29 August 
2025 14:55:59 +0000 (0:00:00.739) 0:01:47.454 ********* 2025-08-29 15:00:53.401203 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.401213 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.401224 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.401233 | orchestrator | 2025-08-29 15:00:53.401244 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-08-29 15:00:53.401256 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:01.371) 0:01:48.826 ********* 2025-08-29 15:00:53.401268 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.401280 | orchestrator | 2025-08-29 15:00:53.401291 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-08-29 15:00:53.401304 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:00.750) 0:01:49.577 ********* 2025-08-29 15:00:53.401324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.401337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:00:53.401415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:00:53.401428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True,
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401525 | orchestrator |
2025-08-29 15:00:53.401537 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-08-29 15:00:53.401549 | orchestrator | Friday 29 August 2025 14:56:05 +0000 (0:00:03.912) 0:01:53.490 *********
2025-08-29 15:00:53.401562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:00:53.401575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:00:53.401634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401647 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.401660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401722 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.401735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:00:53.401755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes':
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.401793 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.401863 | orchestrator |
2025-08-29 15:00:53.401875 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-08-29 15:00:53.401887 | orchestrator | Friday 29 August 2025 14:56:06 +0000 (0:00:01.149) 0:01:54.640 *********
2025-08-29 15:00:53.401899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 15:00:53.401916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 15:00:53.401927 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.401937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 15:00:53.401952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 15:00:53.401969 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.401980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 15:00:53.401990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-08-29 15:00:53.402000 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.402011 | orchestrator |
2025-08-29 15:00:53.402060 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-08-29 15:00:53.402071 | orchestrator | Friday 29 August 2025 14:56:08 +0000 (0:00:01.280) 0:01:55.920 *********
2025-08-29 15:00:53.402081 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.402092 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.402102 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.402140 | orchestrator |
2025-08-29 15:00:53.402151 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-08-29 15:00:53.402162 | orchestrator | Friday 29 August 2025 14:56:09 +0000 (0:00:01.432) 0:01:57.353 *********
2025-08-29 15:00:53.402172 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.402182 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.402192 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.402202 | orchestrator |
2025-08-29 15:00:53.402212 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-08-29 15:00:53.402222 | orchestrator | Friday 29 August 2025 14:56:12 +0000 (0:00:02.528) 0:01:59.881 *********
2025-08-29 15:00:53.402232 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.402243 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.402253 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.402263 | orchestrator |
2025-08-29 15:00:53.402273 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-08-29 15:00:53.402283 | orchestrator | Friday 29 August 2025 14:56:12 +0000 (0:00:00.611) 0:02:00.492 *********
2025-08-29 15:00:53.402294 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.402304 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.402314 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.402324 | orchestrator |
2025-08-29 15:00:53.402334 | orchestrator | TASK [include_role : designate] ************************************************
2025-08-29 15:00:53.402344 | orchestrator | Friday 29 August 2025 14:56:13 +0000 (0:00:00.462) 0:02:00.955 *********
2025-08-29 15:00:53.402355 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:53.402365 | orchestrator |
2025-08-29 15:00:53.402375 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-08-29 15:00:53.402385 | orchestrator | Friday 29 August 2025 14:56:14 +0000 (0:00:01.180) 0:02:02.135 *********
2025-08-29 15:00:53.402395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled':
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 15:00:53.402427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 15:00:53.402443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 15:00:53.402517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 15:00:53.402528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 15:00:53.402570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 15:00:53.402613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes':
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402698 | orchestrator |
2025-08-29 15:00:53.402708 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-08-29 15:00:53.402718 | orchestrator | Friday 29 August 2025 14:56:19 +0000 (0:00:05.145) 0:02:07.281 *********
2025-08-29 15:00:53.402738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-08-29 15:00:53.402748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-08-29 15:00:53.402758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.402826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink',
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402835 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.402849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:00:53.402860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:00:53.402869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:00:53.402884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:00:53.402909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.402985 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.403001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.403016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.403027 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.403037 | orchestrator | 2025-08-29 15:00:53.403047 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-08-29 15:00:53.403056 | orchestrator | Friday 29 August 2025 14:56:20 +0000 (0:00:01.053) 0:02:08.335 ********* 2025-08-29 15:00:53.403066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 15:00:53.403075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 15:00:53.403085 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.403094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 15:00:53.403103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-08-29 15:00:53.403120 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.403130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-08-29 15:00:53.403140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  
2025-08-29 15:00:53.403150 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.403160 | orchestrator | 2025-08-29 15:00:53.403170 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-08-29 15:00:53.403180 | orchestrator | Friday 29 August 2025 14:56:21 +0000 (0:00:01.282) 0:02:09.617 ********* 2025-08-29 15:00:53.403189 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.403199 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.403208 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.403216 | orchestrator | 2025-08-29 15:00:53.403226 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-08-29 15:00:53.403235 | orchestrator | Friday 29 August 2025 14:56:23 +0000 (0:00:01.349) 0:02:10.967 ********* 2025-08-29 15:00:53.403245 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.403254 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.403263 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.403272 | orchestrator | 2025-08-29 15:00:53.403282 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-08-29 15:00:53.403290 | orchestrator | Friday 29 August 2025 14:56:25 +0000 (0:00:02.100) 0:02:13.068 ********* 2025-08-29 15:00:53.403299 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.403308 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.403317 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.403327 | orchestrator | 2025-08-29 15:00:53.403336 | orchestrator | TASK [include_role : glance] *************************************************** 2025-08-29 15:00:53.403344 | orchestrator | Friday 29 August 2025 14:56:25 +0000 (0:00:00.615) 0:02:13.683 ********* 2025-08-29 15:00:53.403353 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.403363 | 
orchestrator | 2025-08-29 15:00:53.403372 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-08-29 15:00:53.403382 | orchestrator | Friday 29 August 2025 14:56:26 +0000 (0:00:00.853) 0:02:14.537 ********* 2025-08-29 15:00:53.403407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-08-29 15:00:53.403426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.403447 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:00:53.403464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.403483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:00:53.403508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.403525 | orchestrator | 2025-08-29 15:00:53.403535 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-08-29 15:00:53.403545 | orchestrator | Friday 29 August 2025 14:56:31 +0000 (0:00:04.383) 0:02:18.921 ********* 2025-08-29 15:00:53.403561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:00:53.403579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.403595 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.403606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:00:53.403627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.403644 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.403654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:00:53.403675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.403693 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.403703 | orchestrator | 2025-08-29 15:00:53.403713 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-08-29 15:00:53.403723 | orchestrator | Friday 29 August 2025 14:56:34 +0000 (0:00:03.123) 0:02:22.044 ********* 2025-08-29 15:00:53.403734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 15:00:53.403745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 15:00:53.403755 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.403765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 15:00:53.403776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 15:00:53.403785 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.403794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 15:00:53.403836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-08-29 15:00:53.403847 | orchestrator | 2025-08-29 15:00:53 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED 2025-08-29 15:00:53.403867 | orchestrator | 2025-08-29 15:00:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:00:53.403876 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.403885 | orchestrator | 2025-08-29 15:00:53.403894 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-08-29 15:00:53.403903 | orchestrator | Friday 29 August 2025 14:56:37 +0000 (0:00:03.279) 0:02:25.323 ********* 2025-08-29 15:00:53.403911 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.403920 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.403928 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.403937 | 
orchestrator | 2025-08-29 15:00:53.404000 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-08-29 15:00:53.404010 | orchestrator | Friday 29 August 2025 14:56:41 +0000 (0:00:02.311) 0:02:28.973 ********* 2025-08-29 15:00:53.404019 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.404028 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.404036 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.404046 | orchestrator | 2025-08-29 15:00:53.404055 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-08-29 15:00:53.404063 | orchestrator | Friday 29 August 2025 14:56:41 +0000 (0:00:00.557) 0:02:29.531 ********* 2025-08-29 15:00:53.404072 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.404080 | orchestrator | 2025-08-29 15:00:53.404089 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-08-29 15:00:53.404097 | orchestrator | Friday 29 August 2025 14:56:42 +0000 (0:00:00.904) 0:02:30.436 ********* 2025-08-29 15:00:53.404106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:00:53.404117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:00:53.404126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:00:53.404143 | orchestrator | 2025-08-29 15:00:53.404153 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-08-29 15:00:53.404162 | orchestrator | Friday 29 August 2025 14:56:46 +0000 (0:00:03.863) 0:02:34.299 ********* 2025-08-29 15:00:53.404184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:00:53.404196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:00:53.404205 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.404215 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.404224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-08-29 15:00:53.404234 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.404243 | orchestrator | 2025-08-29 15:00:53.404253 | orchestrator | TASK 
[haproxy-config : Configuring firewall for grafana] *********************** 2025-08-29 15:00:53.404262 | orchestrator | Friday 29 August 2025 14:56:47 +0000 (0:00:00.671) 0:02:34.970 ********* 2025-08-29 15:00:53.404271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 15:00:53.404281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 15:00:53.404290 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.404299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 15:00:53.404311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 15:00:53.404339 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.404355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-08-29 15:00:53.404369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-08-29 15:00:53.404383 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.404397 | orchestrator | 2025-08-29 15:00:53.404409 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-08-29 
15:00:53.404421 | orchestrator | Friday 29 August 2025 14:56:47 +0000 (0:00:00.708) 0:02:35.679 ********* 2025-08-29 15:00:53.404433 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.404446 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.404459 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.404473 | orchestrator | 2025-08-29 15:00:53.404485 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-08-29 15:00:53.404498 | orchestrator | Friday 29 August 2025 14:56:49 +0000 (0:00:01.362) 0:02:37.041 ********* 2025-08-29 15:00:53.404513 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.404525 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.404538 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.404550 | orchestrator | 2025-08-29 15:00:53.404575 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-08-29 15:00:53.404591 | orchestrator | Friday 29 August 2025 14:56:51 +0000 (0:00:02.081) 0:02:39.123 ********* 2025-08-29 15:00:53.404605 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.404617 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.404631 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.404644 | orchestrator | 2025-08-29 15:00:53.404657 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-08-29 15:00:53.404671 | orchestrator | Friday 29 August 2025 14:56:51 +0000 (0:00:00.571) 0:02:39.694 ********* 2025-08-29 15:00:53.404686 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.404699 | orchestrator | 2025-08-29 15:00:53.404723 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-08-29 15:00:53.404739 | orchestrator | Friday 29 August 2025 14:56:52 +0000 (0:00:00.925) 0:02:40.620 ********* 
2025-08-29 15:00:53.405007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:00:53.405065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:00:53.405094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:00:53.405116 | orchestrator | 2025-08-29 15:00:53.405129 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-08-29 15:00:53.405142 | orchestrator | Friday 29 August 2025 14:56:57 +0000 (0:00:05.025) 0:02:45.646 ********* 2025-08-29 15:00:53.405169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 15:00:53.405184 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.405202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 15:00:53.405219 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.405233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 15:00:53.405247 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.405260 | orchestrator |
2025-08-29 15:00:53.405273 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-08-29 15:00:53.405295 | orchestrator | Friday 29 August 2025 14:56:59 +0000 (0:00:01.357) 0:02:47.004 *********
2025-08-29 15:00:53.405316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 15:00:53.405329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 15:00:53.405343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 15:00:53.405356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 15:00:53.405369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 15:00:53.405381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 15:00:53.405393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 15:00:53.405407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 15:00:53.405419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-08-29 15:00:53.405436 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.405444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 15:00:53.405453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-08-29 15:00:53.405462 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.405470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 15:00:53.405485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-08-29 15:00:53.405499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-08-29 15:00:53.405509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-08-29 15:00:53.405517 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.405525 | orchestrator |
2025-08-29 15:00:53.405534 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-08-29 15:00:53.405543 | orchestrator | Friday 29 August 2025 14:57:00 +0000 (0:00:01.275) 0:02:48.279 *********
2025-08-29 15:00:53.405581 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.405590 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.405599 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.405608 | orchestrator |
2025-08-29 15:00:53.405616 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-08-29 15:00:53.405625 | orchestrator | Friday 29 August 2025 14:57:01 +0000 (0:00:01.304) 0:02:49.583 *********
2025-08-29 15:00:53.405633 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.405643 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.405652 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.405662 | orchestrator |
2025-08-29 15:00:53.405672 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-08-29 15:00:53.405683 | orchestrator | Friday 29 August 2025 14:57:03 +0000 (0:00:01.983) 0:02:51.567 *********
2025-08-29 15:00:53.405692 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.405702 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.405711 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.405721 | orchestrator |
2025-08-29 15:00:53.405731 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-08-29 15:00:53.405741 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:00.307) 0:02:51.875 *********
2025-08-29 15:00:53.405752 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.405762 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.405772 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.405782 | orchestrator |
2025-08-29 15:00:53.405792 | orchestrator | TASK [include_role : keystone] *************************************************
2025-08-29 15:00:53.405862 | orchestrator | Friday 29 August 2025 14:57:04 +0000 (0:00:00.541) 0:02:52.416 *********
2025-08-29 15:00:53.405873 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:53.405883 | orchestrator |
2025-08-29 15:00:53.405893 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-08-29 15:00:53.405903 | orchestrator | Friday 29 August 2025 14:57:05 +0000 (0:00:01.382) 0:02:53.799 *********
2025-08-29 15:00:53.405921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:00:53.405942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:00:53.405960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:00:53.405970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:00:53.405980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:00:53.405989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:00:53.406011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:00:53.406061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:00:53.406078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:00:53.406087 | orchestrator |
2025-08-29 15:00:53.406096 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-08-29 15:00:53.406104 | orchestrator | Friday 29 August 2025 14:57:09 +0000 (0:00:03.476) 0:02:57.276 *********
2025-08-29 15:00:53.406112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:00:53.406121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:00:53.406141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:00:53.406150 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.406159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:00:53.406174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:00:53.406184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:00:53.406192 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.406201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-08-29 15:00:53.406218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-08-29 15:00:53.406231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:00:53.406241 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.406249 | orchestrator |
2025-08-29 15:00:53.406258 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-08-29 15:00:53.406267 | orchestrator | Friday 29 August 2025 14:57:10 +0000 (0:00:01.038) 0:02:58.314 *********
2025-08-29 15:00:53.406275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 15:00:53.406283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 15:00:53.406292 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.406315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 15:00:53.406325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 15:00:53.406334 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.406342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 15:00:53.406351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-08-29 15:00:53.406360 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.406368 | orchestrator |
2025-08-29 15:00:53.406376 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-08-29 15:00:53.406384 | orchestrator | Friday 29 August 2025 14:57:11 +0000 (0:00:00.829) 0:02:59.144 *********
2025-08-29 15:00:53.406398 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.406406 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.406414 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.406422 | orchestrator |
2025-08-29 15:00:53.406431 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-08-29 15:00:53.406439 | orchestrator | Friday 29 August 2025 14:57:12 +0000 (0:00:01.352) 0:03:00.496 *********
2025-08-29 15:00:53.406447 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.406455 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.406463 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.406472 | orchestrator |
2025-08-29 15:00:53.406481 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-08-29 15:00:53.406489 | orchestrator | Friday 29 August 2025 14:57:14 +0000 (0:00:02.211) 0:03:02.708 *********
2025-08-29 15:00:53.406497 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.406504 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.406512 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.406520 | orchestrator |
2025-08-29 15:00:53.406528 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-08-29 15:00:53.406537 | orchestrator | Friday 29 August 2025 14:57:15 +0000 (0:00:00.681) 0:03:03.390 *********
2025-08-29 15:00:53.406545 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:53.406553 | orchestrator |
2025-08-29 15:00:53.406562 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-08-29 15:00:53.406570 | orchestrator | Friday 29 August 2025 14:57:16 +0000 (0:00:01.012) 0:03:04.402 *********
2025-08-29 15:00:53.406578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:00:53.406610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.406627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:00:53.406643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.406652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:00:53.406664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.406673 | orchestrator |
2025-08-29 15:00:53.406681 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-08-29 15:00:53.406690 | orchestrator | Friday 29 August 2025 14:57:19 +0000 (0:00:03.323) 0:03:07.725 *********
2025-08-29 15:00:53.406699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:00:53.406713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.406726 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.406735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:00:53.406744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:00:53.406753 | orchestrator | skipping:
[testbed-node-1] 2025-08-29 15:00:53.406765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:00:53.406774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.406787 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.406808 | orchestrator | 2025-08-29 15:00:53.406821 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-08-29 15:00:53.406830 | 
orchestrator | Friday 29 August 2025 14:57:20 +0000 (0:00:01.069) 0:03:08.794 ********* 2025-08-29 15:00:53.406838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 15:00:53.406847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 15:00:53.406855 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.406863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 15:00:53.406871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 15:00:53.406879 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.406888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-08-29 15:00:53.406897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-08-29 15:00:53.406905 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.406913 | orchestrator | 2025-08-29 15:00:53.406921 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-08-29 15:00:53.406929 | orchestrator | Friday 29 August 2025 14:57:21 +0000 (0:00:00.886) 0:03:09.681 ********* 2025-08-29 
15:00:53.406937 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.406945 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.406953 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.406961 | orchestrator | 2025-08-29 15:00:53.406969 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-08-29 15:00:53.406977 | orchestrator | Friday 29 August 2025 14:57:23 +0000 (0:00:01.414) 0:03:11.095 ********* 2025-08-29 15:00:53.406986 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.406994 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.407002 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.407010 | orchestrator | 2025-08-29 15:00:53.407017 | orchestrator | TASK [include_role : manila] *************************************************** 2025-08-29 15:00:53.407025 | orchestrator | Friday 29 August 2025 14:57:25 +0000 (0:00:02.073) 0:03:13.169 ********* 2025-08-29 15:00:53.407033 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.407041 | orchestrator | 2025-08-29 15:00:53.407049 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-08-29 15:00:53.407057 | orchestrator | Friday 29 August 2025 14:57:26 +0000 (0:00:01.323) 0:03:14.493 ********* 2025-08-29 15:00:53.407069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 15:00:53.407087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 15:00:53.407109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-08-29 15:00:53.407176 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407201 | orchestrator | 2025-08-29 15:00:53.407210 | orchestrator | TASK [haproxy-config : Add configuration for 
manila when using single external frontend] *** 2025-08-29 15:00:53.407218 | orchestrator | Friday 29 August 2025 14:57:31 +0000 (0:00:04.844) 0:03:19.338 ********* 2025-08-29 15:00:53.407229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 15:00:53.407245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407275 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.407284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 
15:00:53.407292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407324 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 15:00:53.407332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-08-29 15:00:53.407338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.407356 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.407361 | orchestrator | 2025-08-29 15:00:53.407366 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-08-29 15:00:53.407376 | orchestrator | Friday 29 August 2025 14:57:32 +0000 (0:00:00.717) 0:03:20.055 ********* 2025-08-29 15:00:53.407381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 15:00:53.407386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 15:00:53.407391 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.407396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 15:00:53.407401 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 15:00:53.407406 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.407411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-08-29 15:00:53.407415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-08-29 15:00:53.407420 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.407426 | orchestrator | 2025-08-29 15:00:53.407430 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-08-29 15:00:53.407435 | orchestrator | Friday 29 August 2025 14:57:33 +0000 (0:00:01.491) 0:03:21.547 ********* 2025-08-29 15:00:53.407443 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.407448 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.407453 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.407457 | orchestrator | 2025-08-29 15:00:53.407462 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-08-29 15:00:53.407467 | orchestrator | Friday 29 August 2025 14:57:35 +0000 (0:00:01.301) 0:03:22.849 ********* 2025-08-29 15:00:53.407472 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.407477 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.407482 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.407486 | orchestrator | 2025-08-29 15:00:53.407491 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-08-29 
15:00:53.407496 | orchestrator | Friday 29 August 2025 14:57:37 +0000 (0:00:02.232) 0:03:25.081 ********* 2025-08-29 15:00:53.407501 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.407506 | orchestrator | 2025-08-29 15:00:53.407510 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-08-29 15:00:53.407515 | orchestrator | Friday 29 August 2025 14:57:38 +0000 (0:00:01.350) 0:03:26.431 ********* 2025-08-29 15:00:53.407520 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:00:53.407525 | orchestrator | 2025-08-29 15:00:53.407530 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-08-29 15:00:53.407535 | orchestrator | Friday 29 August 2025 14:57:41 +0000 (0:00:02.785) 0:03:29.217 ********* 2025-08-29 15:00:53.407544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:00:53.407554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 15:00:53.407559 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.407569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:00:53.407579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 15:00:53.407584 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 15:00:53.407592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:00:53.407602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 15:00:53.407607 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.407612 | orchestrator | 2025-08-29 15:00:53.407617 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-08-29 15:00:53.407622 | orchestrator | Friday 29 August 2025 14:57:43 +0000 (0:00:02.311) 0:03:31.528 ********* 2025-08-29 15:00:53.407627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 
fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:00:53.407638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 15:00:53.407644 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.407653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:00:53.407658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 15:00:53.407668 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 15:00:53.407676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:00:53.407682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-08-29 15:00:53.407687 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.407692 | orchestrator | 2025-08-29 15:00:53.407697 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-08-29 15:00:53.407702 | orchestrator | Friday 29 August 2025 14:57:46 +0000 (0:00:02.485) 0:03:34.014 ********* 2025-08-29 15:00:53.407710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 15:00:53.407716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 15:00:53.407724 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.407729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 15:00:53.407734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-08-29 15:00:53.407739 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.407748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-08-29 15:00:53.407753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-08-29 15:00:53.407758 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.407763 | orchestrator |
2025-08-29 15:00:53.407768 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-08-29 15:00:53.407773 | orchestrator | Friday 29 August 2025 14:57:48 +0000 (0:00:02.328) 0:03:36.342 *********
2025-08-29 15:00:53.407778 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.407782 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.407787 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.407792 | orchestrator |
2025-08-29 15:00:53.407814 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-08-29 15:00:53.407822 | orchestrator | Friday 29 August 2025 14:57:50 +0000 (0:00:01.726) 0:03:38.069 *********
2025-08-29 15:00:53.407829 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.407834 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.407838 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.407843 | orchestrator |
2025-08-29 15:00:53.407848 | orchestrator | TASK [include_role : masakari]
************************************************* 2025-08-29 15:00:53.407858 | orchestrator | Friday 29 August 2025 14:57:51 +0000 (0:00:01.201) 0:03:39.270 ********* 2025-08-29 15:00:53.407866 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.407872 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.407880 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.407887 | orchestrator | 2025-08-29 15:00:53.407896 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-08-29 15:00:53.407903 | orchestrator | Friday 29 August 2025 14:57:51 +0000 (0:00:00.282) 0:03:39.553 ********* 2025-08-29 15:00:53.407911 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.407918 | orchestrator | 2025-08-29 15:00:53.407927 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-08-29 15:00:53.407934 | orchestrator | Friday 29 August 2025 14:57:52 +0000 (0:00:01.136) 0:03:40.689 ********* 2025-08-29 15:00:53.407943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 15:00:53.407952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 15:00:53.407965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-08-29 15:00:53.407974 | orchestrator | 2025-08-29 15:00:53.407982 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-08-29 15:00:53.407990 | orchestrator | Friday 29 August 2025 14:57:54 +0000 (0:00:01.436) 0:03:42.126 ********* 2025-08-29 15:00:53.407997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 15:00:53.408011 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.408025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-08-29 15:00:53.408034 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.408042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-08-29 15:00:53.408051 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.408059 | orchestrator |
2025-08-29 15:00:53.408068 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-08-29 15:00:53.408073 | orchestrator | Friday 29 August 2025 14:57:54 +0000 (0:00:00.355) 0:03:42.481 *********
2025-08-29 15:00:53.408078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-08-29 15:00:53.408083 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.408088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-08-29 15:00:53.408092 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.408097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-08-29 15:00:53.408102 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.408107 | orchestrator |
2025-08-29 15:00:53.408112 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-08-29 15:00:53.408117 | orchestrator | Friday 29 August 2025 14:57:55 +0000 (0:00:00.893) 0:03:43.375 *********
2025-08-29 15:00:53.408121 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.408129 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.408134 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.408139 | orchestrator |
2025-08-29 15:00:53.408144 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-08-29 15:00:53.408148 | orchestrator | Friday 29 August 2025 14:57:56 +0000 (0:00:00.477) 0:03:43.852 *********
2025-08-29 15:00:53.408160 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.408164 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.408169 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.408174 | orchestrator |
2025-08-29 15:00:53.408179 | orchestrator | TASK [include_role : mistral] **************************************************
2025-08-29 15:00:53.408183 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:01.340) 0:03:45.192 *********
2025-08-29 15:00:53.408188 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.408193 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.408198 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.408202 | orchestrator |
2025-08-29 15:00:53.408207 | orchestrator | TASK [include_role : neutron] **************************************************
2025-08-29 15:00:53.408212 | orchestrator | Friday 29 August 2025 14:57:57 +0000 (0:00:00.331) 0:03:45.524 *********
2025-08-29 15:00:53.408216 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:53.408221 | orchestrator |
2025-08-29 15:00:53.408226 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-08-29 15:00:53.408231 | orchestrator | Friday 29 August 2025 14:57:59 +0000 (0:00:01.446) 0:03:46.971 *********
2025-08-29 15:00:53.408239 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:00:53.408245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 15:00:53.408283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:00:53.408428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.408500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 15:00:53.408529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2025-08-29 15:00:53.408588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.408667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.408679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408694 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.408743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.408749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:00:53.408758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 15:00:53.408828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.408927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.408950 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.408958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.408963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-08-29 15:00:53.408968 | orchestrator |
2025-08-29 15:00:53.408973 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-08-29 15:00:53.408978 | orchestrator | Friday 29 August 2025 14:58:03 +0000 (0:00:04.314) 0:03:51.285 *********
2025-08-29 15:00:53.409019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-08-29 15:00:53.409026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:00:53.409049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 15:00:53.409120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 15:00:53.409160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-08-29 15:00:53.409167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.409283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.409297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409305 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.409509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.409553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.409565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.409570 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.409575 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.409581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:00:53.409586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-08-29 15:00:53.409628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.409671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-08-29 15:00:53.409694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.409701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-08-29 15:00:53.409707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:00:53.409711 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.409720 | orchestrator | 2025-08-29 15:00:53.409725 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-08-29 15:00:53.409731 | orchestrator | Friday 29 August 2025 14:58:05 +0000 (0:00:01.633) 0:03:52.918 ********* 2025-08-29 15:00:53.409736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 15:00:53.409742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}})  2025-08-29 15:00:53.409747 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.409765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 15:00:53.409771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 15:00:53.409776 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.409781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-08-29 15:00:53.409786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-08-29 15:00:53.409791 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.409835 | orchestrator | 2025-08-29 15:00:53.409842 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-08-29 15:00:53.409847 | orchestrator | Friday 29 August 2025 14:58:07 +0000 (0:00:02.114) 0:03:55.033 ********* 2025-08-29 15:00:53.409852 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.409856 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.409861 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.409866 | orchestrator | 2025-08-29 15:00:53.409871 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-08-29 15:00:53.409876 | orchestrator | Friday 29 August 2025 14:58:08 +0000 (0:00:01.335) 0:03:56.368 ********* 2025-08-29 15:00:53.409890 | orchestrator | changed: 
[testbed-node-0] 2025-08-29 15:00:53.409895 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.409900 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.409905 | orchestrator | 2025-08-29 15:00:53.409910 | orchestrator | TASK [include_role : placement] ************************************************ 2025-08-29 15:00:53.409914 | orchestrator | Friday 29 August 2025 14:58:10 +0000 (0:00:02.296) 0:03:58.665 ********* 2025-08-29 15:00:53.409919 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.409924 | orchestrator | 2025-08-29 15:00:53.409929 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-08-29 15:00:53.409934 | orchestrator | Friday 29 August 2025 14:58:12 +0000 (0:00:01.238) 0:03:59.903 ********* 2025-08-29 15:00:53.409942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.409953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.409975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.409981 | orchestrator | 2025-08-29 15:00:53.409986 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-08-29 15:00:53.409991 | orchestrator | Friday 29 August 2025 14:58:15 +0000 (0:00:03.849) 0:04:03.753 ********* 2025-08-29 
15:00:53.409996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.410001 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.410006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.410037 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 15:00:53.410046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.410052 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.410057 | orchestrator | 2025-08-29 15:00:53.410061 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-08-29 15:00:53.410066 | orchestrator | Friday 29 August 2025 14:58:16 +0000 (0:00:00.556) 0:04:04.309 ********* 2025-08-29 15:00:53.410071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410083 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.410102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410113 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.410117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410127 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.410132 | orchestrator | 2025-08-29 15:00:53.410137 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-08-29 15:00:53.410142 | orchestrator | Friday 29 August 2025 14:58:17 +0000 (0:00:00.850) 0:04:05.160 ********* 2025-08-29 15:00:53.410147 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.410152 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.410157 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.410162 | orchestrator | 2025-08-29 15:00:53.410166 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-08-29 15:00:53.410171 | orchestrator | Friday 29 August 2025 14:58:18 +0000 (0:00:01.389) 0:04:06.550 ********* 2025-08-29 15:00:53.410176 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.410181 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.410187 | orchestrator 
| changed: [testbed-node-1] 2025-08-29 15:00:53.410192 | orchestrator | 2025-08-29 15:00:53.410204 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-08-29 15:00:53.410210 | orchestrator | Friday 29 August 2025 14:58:20 +0000 (0:00:02.253) 0:04:08.803 ********* 2025-08-29 15:00:53.410216 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.410222 | orchestrator | 2025-08-29 15:00:53.410227 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-08-29 15:00:53.410232 | orchestrator | Friday 29 August 2025 14:58:22 +0000 (0:00:01.552) 0:04:10.356 ********* 2025-08-29 15:00:53.410242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 
15:00:53.410249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.410287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.410322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410334 | orchestrator | 2025-08-29 15:00:53.410340 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-08-29 15:00:53.410349 | orchestrator | Friday 29 August 2025 14:58:27 +0000 (0:00:04.513) 0:04:14.870 ********* 2025-08-29 15:00:53.410356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.410366 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410377 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.410396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.410402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410418 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.410426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.410432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.410455 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.410460 | orchestrator | 2025-08-29 15:00:53.410466 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-08-29 15:00:53.410471 | orchestrator | Friday 29 August 2025 14:58:28 +0000 (0:00:01.295) 0:04:16.165 ********* 2025-08-29 15:00:53.410477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410503 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.410508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410548 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.410553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-08-29 15:00:53.410558 | orchestrator | 
skipping: [testbed-node-1] 2025-08-29 15:00:53.410562 | orchestrator | 2025-08-29 15:00:53.410567 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-08-29 15:00:53.410572 | orchestrator | Friday 29 August 2025 14:58:29 +0000 (0:00:00.949) 0:04:17.115 ********* 2025-08-29 15:00:53.410576 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.410581 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.410585 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.410590 | orchestrator | 2025-08-29 15:00:53.410595 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-08-29 15:00:53.410599 | orchestrator | Friday 29 August 2025 14:58:30 +0000 (0:00:01.451) 0:04:18.567 ********* 2025-08-29 15:00:53.410604 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.410609 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.410613 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.410618 | orchestrator | 2025-08-29 15:00:53.410622 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-08-29 15:00:53.410631 | orchestrator | Friday 29 August 2025 14:58:32 +0000 (0:00:02.224) 0:04:20.791 ********* 2025-08-29 15:00:53.410636 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.410640 | orchestrator | 2025-08-29 15:00:53.410645 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-08-29 15:00:53.410662 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:01.588) 0:04:22.380 ********* 2025-08-29 15:00:53.410668 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-08-29 15:00:53.410672 | orchestrator | 2025-08-29 15:00:53.410677 | orchestrator | TASK 
[haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-08-29 15:00:53.410682 | orchestrator | Friday 29 August 2025 14:58:35 +0000 (0:00:00.885) 0:04:23.265 ********* 2025-08-29 15:00:53.410687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 15:00:53.410692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 15:00:53.410697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-08-29 15:00:53.410702 | orchestrator | 2025-08-29 15:00:53.410706 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external 
frontend] *** 2025-08-29 15:00:53.410711 | orchestrator | Friday 29 August 2025 14:58:39 +0000 (0:00:04.148) 0:04:27.414 ********* 2025-08-29 15:00:53.410719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 15:00:53.410724 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.410729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 15:00:53.410734 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.410738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 15:00:53.410746 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.410751 | 
orchestrator | 2025-08-29 15:00:53.410755 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-08-29 15:00:53.410760 | orchestrator | Friday 29 August 2025 14:58:41 +0000 (0:00:01.536) 0:04:28.951 ********* 2025-08-29 15:00:53.410776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 15:00:53.410782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 15:00:53.410787 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.410792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 15:00:53.410811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 15:00:53.410816 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.410821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 15:00:53.410826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-08-29 15:00:53.410830 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.410835 | orchestrator | 2025-08-29 15:00:53.410840 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 15:00:53.410844 | orchestrator | Friday 29 August 2025 14:58:42 +0000 (0:00:01.571) 0:04:30.522 ********* 2025-08-29 15:00:53.410849 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.410854 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.410858 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.410863 | orchestrator | 2025-08-29 15:00:53.410867 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 15:00:53.410872 | orchestrator | Friday 29 August 2025 14:58:45 +0000 (0:00:02.456) 0:04:32.979 ********* 2025-08-29 15:00:53.410876 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.410881 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.410885 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.410890 | orchestrator | 2025-08-29 15:00:53.410895 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-08-29 15:00:53.410899 | orchestrator | Friday 29 August 2025 14:58:48 +0000 (0:00:03.162) 0:04:36.141 ********* 2025-08-29 15:00:53.410904 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-08-29 15:00:53.410909 | orchestrator | 2025-08-29 15:00:53.410914 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-08-29 15:00:53.410922 | orchestrator | Friday 29 August 2025 14:58:49 +0000 (0:00:01.393) 0:04:37.535 ********* 2025-08-29 15:00:53.410930 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 15:00:53.410935 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.410940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 15:00:53.410945 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.410964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 15:00:53.410970 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.410974 | orchestrator | 2025-08-29 15:00:53.410979 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy 
when using single external frontend] *** 2025-08-29 15:00:53.410984 | orchestrator | Friday 29 August 2025 14:58:50 +0000 (0:00:01.276) 0:04:38.811 ********* 2025-08-29 15:00:53.410989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 15:00:53.410993 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.410998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-08-29 15:00:53.411003 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.411008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  
2025-08-29 15:00:53.411019 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.411024 | orchestrator | 2025-08-29 15:00:53.411029 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-08-29 15:00:53.411033 | orchestrator | Friday 29 August 2025 14:58:52 +0000 (0:00:01.315) 0:04:40.127 ********* 2025-08-29 15:00:53.411038 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.411043 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.411047 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.411052 | orchestrator | 2025-08-29 15:00:53.411056 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 15:00:53.411061 | orchestrator | Friday 29 August 2025 14:58:54 +0000 (0:00:01.803) 0:04:41.930 ********* 2025-08-29 15:00:53.411066 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:53.411071 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:53.411075 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:53.411080 | orchestrator | 2025-08-29 15:00:53.411087 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 15:00:53.411092 | orchestrator | Friday 29 August 2025 14:58:56 +0000 (0:00:02.526) 0:04:44.456 ********* 2025-08-29 15:00:53.411097 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:53.411101 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:53.411106 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:53.411110 | orchestrator | 2025-08-29 15:00:53.411115 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-08-29 15:00:53.411120 | orchestrator | Friday 29 August 2025 14:58:59 +0000 (0:00:02.824) 0:04:47.281 ********* 2025-08-29 15:00:53.411124 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item=nova-serialproxy) 2025-08-29 15:00:53.411129 | orchestrator | 2025-08-29 15:00:53.411134 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-08-29 15:00:53.411138 | orchestrator | Friday 29 August 2025 14:59:00 +0000 (0:00:01.195) 0:04:48.476 ********* 2025-08-29 15:00:53.411143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 15:00:53.411148 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.411165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 15:00:53.411171 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.411175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 15:00:53.411180 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.411185 | orchestrator | 2025-08-29 15:00:53.411194 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-08-29 15:00:53.411198 | orchestrator | Friday 29 August 2025 14:59:01 +0000 (0:00:01.308) 0:04:49.784 ********* 2025-08-29 15:00:53.411203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 15:00:53.411208 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.411213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 15:00:53.411218 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.411226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-08-29 15:00:53.411231 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.411236 | orchestrator | 2025-08-29 15:00:53.411241 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-08-29 15:00:53.411245 | orchestrator | Friday 29 August 2025 14:59:03 +0000 (0:00:01.333) 0:04:51.118 ********* 2025-08-29 15:00:53.411250 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.411255 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.411259 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.411264 | orchestrator | 2025-08-29 15:00:53.411268 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-08-29 15:00:53.411273 | orchestrator | Friday 29 August 2025 14:59:04 +0000 (0:00:01.571) 0:04:52.689 ********* 2025-08-29 15:00:53.411278 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:53.411282 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:53.411287 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:53.411291 | orchestrator | 2025-08-29 15:00:53.411296 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-08-29 15:00:53.411300 | orchestrator | Friday 29 August 2025 14:59:07 +0000 (0:00:02.638) 0:04:55.328 ********* 2025-08-29 15:00:53.411305 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:00:53.411310 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:00:53.411314 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:00:53.411319 | orchestrator | 2025-08-29 15:00:53.411323 | 
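The many "skipping" results above follow from the shape of the data the haproxy-config role loops over: each service entry carries an `enabled` flag and a nested `haproxy` dict of frontends, each with its own `enabled`, `port`, and `external` settings. The sketch below is illustrative only (it is not kolla-ansible code, and real skips also depend on conditions not shown here, such as host group membership and single-external-frontend settings); it shows the basic filtering a role like this performs over dicts shaped like the log's items:

```python
# Minimal sketch (assumption: simplified from the item dicts in the log above).
# Entries whose 'enabled' flag is false never produce haproxy config, which is
# one reason the spicehtml5proxy/serialproxy items report "skipping".

services = {
    "nova-novncproxy": {
        "enabled": True,
        "haproxy": {"nova_novncproxy": {"enabled": True, "port": "6080"}},
    },
    "nova-spicehtml5proxy": {
        "enabled": False,
        "haproxy": {"nova_spicehtml5proxy": {"enabled": False, "port": "6082"}},
    },
    "nova-serialproxy": {
        "enabled": False,
        "haproxy": {"nova_serialconsole_proxy": {"enabled": False, "port": "6083"}},
    },
}

def enabled_frontends(services):
    """Yield (service, frontend, port) for every enabled haproxy frontend."""
    out = []
    for name, svc in services.items():
        if not svc.get("enabled"):
            continue  # disabled service: the loop item is skipped entirely
        for fe_name, fe in svc.get("haproxy", {}).items():
            if fe.get("enabled"):
                out.append((name, fe_name, fe["port"]))
    return out

print(enabled_frontends(services))
```

Under these sample inputs only the novncproxy frontend survives the filter; in the actual run even that item is skipped on these nodes, which the role decides from conditions outside this sketch.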
orchestrator | TASK [include_role : octavia] ************************************************** 2025-08-29 15:00:53.411328 | orchestrator | Friday 29 August 2025 14:59:10 +0000 (0:00:03.325) 0:04:58.653 ********* 2025-08-29 15:00:53.411333 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.411337 | orchestrator | 2025-08-29 15:00:53.411342 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-08-29 15:00:53.411347 | orchestrator | Friday 29 August 2025 14:59:12 +0000 (0:00:01.662) 0:05:00.315 ********* 2025-08-29 15:00:53.411364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.411375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:00:53.411381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.411398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.411418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:00:53.411424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.411441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.411446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:00:53.411469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.411484 | orchestrator | 2025-08-29 15:00:53.411489 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-08-29 15:00:53.411494 | orchestrator | Friday 29 August 2025 14:59:16 +0000 (0:00:03.580) 0:05:03.896 ********* 2025-08-29 15:00:53.411499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.411506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:00:53.411511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.411542 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.411547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.411552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:00:53.411571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.411600 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.411606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.411611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:00:53.411616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:00:53.411628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:00:53.411636 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.411641 | orchestrator | 2025-08-29 15:00:53.411646 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-08-29 
15:00:53.411650 | orchestrator | Friday 29 August 2025 14:59:16 +0000 (0:00:00.769) 0:05:04.666 ********* 2025-08-29 15:00:53.411655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 15:00:53.411660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 15:00:53.411668 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.411685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 15:00:53.411690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 15:00:53.411695 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.411700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 15:00:53.411705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-08-29 15:00:53.411709 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.411714 | orchestrator | 2025-08-29 15:00:53.411719 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] 
************ 2025-08-29 15:00:53.411723 | orchestrator | Friday 29 August 2025 14:59:18 +0000 (0:00:01.271) 0:05:05.937 ********* 2025-08-29 15:00:53.411728 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.411733 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.411740 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.411747 | orchestrator | 2025-08-29 15:00:53.411754 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-08-29 15:00:53.411761 | orchestrator | Friday 29 August 2025 14:59:19 +0000 (0:00:01.592) 0:05:07.530 ********* 2025-08-29 15:00:53.411768 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.411775 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.411782 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.411789 | orchestrator | 2025-08-29 15:00:53.411811 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-08-29 15:00:53.411818 | orchestrator | Friday 29 August 2025 14:59:21 +0000 (0:00:02.217) 0:05:09.747 ********* 2025-08-29 15:00:53.411822 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.411827 | orchestrator | 2025-08-29 15:00:53.411831 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-08-29 15:00:53.411836 | orchestrator | Friday 29 August 2025 14:59:23 +0000 (0:00:01.365) 0:05:11.112 ********* 2025-08-29 15:00:53.411846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:53.411856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:53.411877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:00:53.411883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:53.411889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:53.411901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:00:53.411907 | orchestrator | 2025-08-29 15:00:53.411912 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-08-29 15:00:53.411916 | orchestrator | Friday 29 August 2025 14:59:28 +0000 (0:00:05.696) 0:05:16.808 ********* 2025-08-29 15:00:53.411933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:53.411939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:53.411944 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.411949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:53.411960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:53.411965 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.411981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:00:53.411986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:00:53.411991 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.411996 | orchestrator | 2025-08-29 15:00:53.412000 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] 
******************** 2025-08-29 15:00:53.412005 | orchestrator | Friday 29 August 2025 14:59:29 +0000 (0:00:00.681) 0:05:17.490 ********* 2025-08-29 15:00:53.412010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 15:00:53.412019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 15:00:53.412024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 15:00:53.412029 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.412034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 15:00:53.412039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 15:00:53.412046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 15:00:53.412051 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.412057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-08-29 15:00:53.412061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 15:00:53.412066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-08-29 15:00:53.412071 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.412075 | orchestrator | 2025-08-29 15:00:53.412080 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-08-29 15:00:53.412085 | orchestrator | Friday 29 August 2025 14:59:30 +0000 (0:00:00.939) 0:05:18.429 ********* 2025-08-29 15:00:53.412089 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.412094 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.412098 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.412103 | orchestrator | 2025-08-29 15:00:53.412107 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-08-29 15:00:53.412112 | orchestrator | Friday 29 August 2025 14:59:31 +0000 (0:00:00.853) 0:05:19.283 ********* 2025-08-29 15:00:53.412116 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.412121 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.412126 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.412130 | orchestrator | 2025-08-29 15:00:53.412148 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-08-29 15:00:53.412153 | orchestrator | Friday 29 August 2025 14:59:32 +0000 
(0:00:01.421) 0:05:20.705 ********* 2025-08-29 15:00:53.412157 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.412162 | orchestrator | 2025-08-29 15:00:53.412167 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-08-29 15:00:53.412171 | orchestrator | Friday 29 August 2025 14:59:34 +0000 (0:00:01.413) 0:05:22.118 ********* 2025-08-29 15:00:53.412176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:00:53.412186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:00:53.412191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:00:53.412199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:00:53.412209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:00:53.412259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:00:53.412264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:00:53.412291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 15:00:53.412300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:00:53.412305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 15:00:53.412323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412347 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:00:53.412363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 15:00:53.412368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-08-29 15:00:53.412382 | orchestrator | 
2025-08-29 15:00:53.412387 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-08-29 15:00:53.412392 | orchestrator | Friday 29 August 2025 14:59:38 +0000 (0:00:04.468) 0:05:26.587 *********
2025-08-29 15:00:53.412436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-08-29 15:00:53.412450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:00:53.412465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes':
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:00:53.412489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 15:00:53.412494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412515 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.412520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:00:53.412525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:00:53.412530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412553 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:00:53.412559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:00:53.412564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 15:00:53.412571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:00:53.412576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412608 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.412612 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:00:53.412618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:00:53.412625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-08-29 15:00:53.412634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:00:53.412646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:00:53.412651 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.412655 | orchestrator | 
2025-08-29 15:00:53.412660 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-08-29 15:00:53.412665 | orchestrator | Friday 29 August 2025 14:59:40 +0000 (0:00:01.286) 0:05:27.874 *********
2025-08-29 15:00:53.412669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-08-29 15:00:53.412674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-08-29 15:00:53.412679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-08-29 15:00:53.412685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-08-29 15:00:53.412690 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.412695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-08-29 15:00:53.412700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-08-29 15:00:53.412710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-08-29 15:00:53.412716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-08-29 15:00:53.412720 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.412725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-08-29 15:00:53.412730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-08-29 15:00:53.412734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-08-29 15:00:53.412742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-08-29 15:00:53.412747 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.412751 | orchestrator | 
2025-08-29 15:00:53.412756 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-08-29 15:00:53.412761 | orchestrator | Friday 29 August 2025 14:59:41 +0000 (0:00:01.139) 0:05:29.013 *********
2025-08-29 15:00:53.412765 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.412770 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.412775 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.412779 | orchestrator | 
2025-08-29 15:00:53.412784 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-08-29 15:00:53.412788 | orchestrator | Friday 29 August 2025 14:59:41 +0000 (0:00:00.446) 0:05:29.460 *********
2025-08-29 15:00:53.412793 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.412843 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.412852 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.412859 | orchestrator | 
2025-08-29 15:00:53.412867 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-08-29 15:00:53.412874 | orchestrator | Friday 29 August 2025 14:59:43 +0000 (0:00:01.788) 0:05:30.923 *********
2025-08-29 15:00:53.412881 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:00:53.412889 | orchestrator | 
2025-08-29 15:00:53.412894 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-08-29 15:00:53.412899 | orchestrator | Friday 29 August 2025 14:59:44 +0000 (0:00:01.788) 0:05:32.712 *********
2025-08-29 15:00:53.412904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 15:00:53.412918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-08-29 15:00:53.412923 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-08-29 15:00:53.412928 | orchestrator | 2025-08-29 15:00:53.412936 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-08-29 15:00:53.412941 | orchestrator | Friday 29 August 2025 14:59:47 +0000 (0:00:02.885) 0:05:35.598 ********* 2025-08-29 15:00:53.412946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 15:00:53.412951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 15:00:53.412961 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.412966 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.412974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-08-29 15:00:53.412979 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.412984 | orchestrator | 2025-08-29 15:00:53.412988 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-08-29 15:00:53.412993 | orchestrator | Friday 29 August 2025 14:59:48 +0000 (0:00:00.447) 0:05:36.045 ********* 2025-08-29 15:00:53.412998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 15:00:53.413003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 15:00:53.413008 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.413012 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-08-29 15:00:53.413022 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413026 | orchestrator | 2025-08-29 15:00:53.413030 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-08-29 15:00:53.413035 | orchestrator | Friday 29 August 2025 14:59:49 +0000 (0:00:01.046) 0:05:37.092 ********* 2025-08-29 15:00:53.413042 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 15:00:53.413047 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413051 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413056 | orchestrator | 2025-08-29 15:00:53.413060 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-08-29 15:00:53.413065 | orchestrator | Friday 29 August 2025 14:59:49 +0000 (0:00:00.446) 0:05:37.539 ********* 2025-08-29 15:00:53.413070 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.413074 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413079 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413083 | orchestrator | 2025-08-29 15:00:53.413088 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-08-29 15:00:53.413093 | orchestrator | Friday 29 August 2025 14:59:51 +0000 (0:00:01.468) 0:05:39.008 ********* 2025-08-29 15:00:53.413097 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:00:53.413105 | orchestrator | 2025-08-29 15:00:53.413109 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-08-29 15:00:53.413114 | orchestrator | Friday 29 August 2025 14:59:53 +0000 (0:00:01.976) 0:05:40.984 ********* 2025-08-29 15:00:53.413119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.413127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.413132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.413140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.413146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.413155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-08-29 15:00:53.413160 | orchestrator | 2025-08-29 15:00:53.413165 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-08-29 15:00:53.413169 | orchestrator | Friday 29 August 2025 14:59:59 +0000 (0:00:06.111) 0:05:47.095 ********* 2025-08-29 15:00:53.413177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.413185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.413190 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.413202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.413210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.413217 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.413237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-08-29 15:00:53.413246 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413253 | orchestrator | 2025-08-29 15:00:53.413261 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-08-29 15:00:53.413278 | orchestrator | Friday 29 August 2025 14:59:59 +0000 (0:00:00.643) 0:05:47.739 ********* 2025-08-29 15:00:53.413284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413317 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.413322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413326 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-08-29 15:00:53.413351 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413355 | orchestrator | 2025-08-29 15:00:53.413359 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-08-29 15:00:53.413363 | orchestrator | Friday 29 August 2025 15:00:01 +0000 (0:00:01.369) 0:05:49.109 ********* 2025-08-29 15:00:53.413367 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.413372 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.413376 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.413380 | orchestrator | 2025-08-29 15:00:53.413384 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-08-29 15:00:53.413388 | orchestrator | Friday 29 August 2025 15:00:02 +0000 (0:00:01.547) 0:05:50.656 ********* 2025-08-29 15:00:53.413392 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:00:53.413399 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:00:53.413404 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:00:53.413408 | orchestrator | 
2025-08-29 15:00:53.413412 | orchestrator | TASK [include_role : swift] **************************************************** 2025-08-29 15:00:53.413416 | orchestrator | Friday 29 August 2025 15:00:05 +0000 (0:00:02.253) 0:05:52.910 ********* 2025-08-29 15:00:53.413420 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.413424 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413428 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413432 | orchestrator | 2025-08-29 15:00:53.413436 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-08-29 15:00:53.413440 | orchestrator | Friday 29 August 2025 15:00:05 +0000 (0:00:00.347) 0:05:53.258 ********* 2025-08-29 15:00:53.413444 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.413448 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413453 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413457 | orchestrator | 2025-08-29 15:00:53.413461 | orchestrator | TASK [include_role : trove] **************************************************** 2025-08-29 15:00:53.413467 | orchestrator | Friday 29 August 2025 15:00:05 +0000 (0:00:00.360) 0:05:53.618 ********* 2025-08-29 15:00:53.413471 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.413475 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413479 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413484 | orchestrator | 2025-08-29 15:00:53.413488 | orchestrator | TASK [include_role : venus] **************************************************** 2025-08-29 15:00:53.413492 | orchestrator | Friday 29 August 2025 15:00:06 +0000 (0:00:00.338) 0:05:53.957 ********* 2025-08-29 15:00:53.413496 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:00:53.413500 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:00:53.413504 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:00:53.413508 | orchestrator | 
2025-08-29 15:00:53.413512 | orchestrator | TASK [include_role : watcher] **************************************************
2025-08-29 15:00:53.413516 | orchestrator | Friday 29 August 2025 15:00:06 +0000 (0:00:00.719) 0:05:54.676 *********
2025-08-29 15:00:53.413520 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.413524 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.413528 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.413532 | orchestrator |
2025-08-29 15:00:53.413536 | orchestrator | TASK [include_role : zun] ******************************************************
2025-08-29 15:00:53.413540 | orchestrator | Friday 29 August 2025 15:00:07 +0000 (0:00:00.326) 0:05:55.003 *********
2025-08-29 15:00:53.413545 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.413549 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.413553 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.413557 | orchestrator |
2025-08-29 15:00:53.413561 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-08-29 15:00:53.413565 | orchestrator | Friday 29 August 2025 15:00:07 +0000 (0:00:00.585) 0:05:55.588 *********
2025-08-29 15:00:53.413569 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.413573 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.413577 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.413582 | orchestrator |
2025-08-29 15:00:53.413586 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-08-29 15:00:53.413590 | orchestrator | Friday 29 August 2025 15:00:08 +0000 (0:00:01.119) 0:05:56.707 *********
2025-08-29 15:00:53.413594 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.413598 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.413602 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.413606 | orchestrator |
2025-08-29 15:00:53.413610 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-08-29 15:00:53.413614 | orchestrator | Friday 29 August 2025 15:00:09 +0000 (0:00:00.349) 0:05:57.056 *********
2025-08-29 15:00:53.413618 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.413623 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.413630 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.413635 | orchestrator |
2025-08-29 15:00:53.413639 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-08-29 15:00:53.413643 | orchestrator | Friday 29 August 2025 15:00:10 +0000 (0:00:00.867) 0:05:57.924 *********
2025-08-29 15:00:53.413647 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.413651 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.413655 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.413659 | orchestrator |
2025-08-29 15:00:53.413663 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-08-29 15:00:53.413668 | orchestrator | Friday 29 August 2025 15:00:10 +0000 (0:00:00.880) 0:05:58.805 *********
2025-08-29 15:00:53.413672 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.413676 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.413680 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.413684 | orchestrator |
2025-08-29 15:00:53.413688 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-08-29 15:00:53.413692 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:01.296) 0:06:00.102 *********
2025-08-29 15:00:53.413696 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.413701 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.413705 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.413709 | orchestrator |
2025-08-29 15:00:53.413716 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-08-29 15:00:53.413720 | orchestrator | Friday 29 August 2025 15:00:22 +0000 (0:00:09.903) 0:06:10.005 *********
2025-08-29 15:00:53.413724 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.413728 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.413732 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.413736 | orchestrator |
2025-08-29 15:00:53.413740 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-08-29 15:00:53.413744 | orchestrator | Friday 29 August 2025 15:00:23 +0000 (0:00:00.826) 0:06:10.832 *********
2025-08-29 15:00:53.413749 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.413753 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.413757 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.413761 | orchestrator |
2025-08-29 15:00:53.413765 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-08-29 15:00:53.413769 | orchestrator | Friday 29 August 2025 15:00:32 +0000 (0:00:09.534) 0:06:20.366 *********
2025-08-29 15:00:53.413773 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.413777 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.413781 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.413785 | orchestrator |
2025-08-29 15:00:53.413790 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-08-29 15:00:53.413794 | orchestrator | Friday 29 August 2025 15:00:36 +0000 (0:00:04.129) 0:06:24.496 *********
2025-08-29 15:00:53.413816 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:00:53.413822 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:00:53.413826 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:00:53.413830 | orchestrator |
2025-08-29 15:00:53.413834 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-08-29 15:00:53.413839 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:09.261) 0:06:33.757 *********
2025-08-29 15:00:53.413843 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.413847 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.413851 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.413855 | orchestrator |
2025-08-29 15:00:53.413859 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-08-29 15:00:53.413863 | orchestrator | Friday 29 August 2025 15:00:46 +0000 (0:00:00.404) 0:06:34.162 *********
2025-08-29 15:00:53.413867 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.413874 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.413878 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.413883 | orchestrator |
2025-08-29 15:00:53.413891 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-08-29 15:00:53.413895 | orchestrator | Friday 29 August 2025 15:00:46 +0000 (0:00:00.402) 0:06:34.564 *********
2025-08-29 15:00:53.413900 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.413904 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.413908 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.413912 | orchestrator |
2025-08-29 15:00:53.413916 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-08-29 15:00:53.413920 | orchestrator | Friday 29 August 2025 15:00:47 +0000 (0:00:00.690) 0:06:35.255 *********
2025-08-29 15:00:53.413924 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.413928 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.413933 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.413937 | orchestrator |
2025-08-29 15:00:53.413941 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-08-29 15:00:53.413945 | orchestrator | Friday 29 August 2025 15:00:47 +0000 (0:00:00.338) 0:06:35.594 *********
2025-08-29 15:00:53.413949 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.413953 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.413957 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.413961 | orchestrator |
2025-08-29 15:00:53.413965 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-08-29 15:00:53.413969 | orchestrator | Friday 29 August 2025 15:00:48 +0000 (0:00:00.415) 0:06:36.009 *********
2025-08-29 15:00:53.413973 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:00:53.413977 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:00:53.413981 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:00:53.413986 | orchestrator |
2025-08-29 15:00:53.413990 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-08-29 15:00:53.413994 | orchestrator | Friday 29 August 2025 15:00:48 +0000 (0:00:00.357) 0:06:36.367 *********
2025-08-29 15:00:53.413998 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.414002 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.414006 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.414010 | orchestrator |
2025-08-29 15:00:53.414037 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-08-29 15:00:53.414043 | orchestrator | Friday 29 August 2025 15:00:49 +0000 (0:00:01.300) 0:06:37.667 *********
2025-08-29 15:00:53.414047 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:00:53.414051 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:00:53.414055 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:00:53.414059 | orchestrator |
2025-08-29 15:00:53.414064 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:00:53.414068 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 15:00:53.414072 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 15:00:53.414077 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-08-29 15:00:53.414081 | orchestrator |
2025-08-29 15:00:53.414085 | orchestrator |
2025-08-29 15:00:53.414089 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:00:53.414093 | orchestrator | Friday 29 August 2025 15:00:50 +0000 (0:00:00.885) 0:06:38.553 *********
2025-08-29 15:00:53.414097 | orchestrator | ===============================================================================
2025-08-29 15:00:53.414104 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.90s
2025-08-29 15:00:53.414109 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.53s
2025-08-29 15:00:53.414113 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.26s
2025-08-29 15:00:53.414117 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.99s
2025-08-29 15:00:53.414128 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.11s
2025-08-29 15:00:53.414132 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.70s
2025-08-29 15:00:53.414136 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.15s
2025-08-29 15:00:53.414140 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.03s
2025-08-29 15:00:53.414144 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.85s
2025-08-29 15:00:53.414148 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.52s
2025-08-29 15:00:53.414152 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.51s
2025-08-29 15:00:53.414157 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.47s
2025-08-29 15:00:53.414161 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.39s
2025-08-29 15:00:53.414165 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.38s
2025-08-29 15:00:53.414169 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.31s
2025-08-29 15:00:53.414173 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.15s
2025-08-29 15:00:53.414177 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.13s
2025-08-29 15:00:53.414181 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.10s
2025-08-29 15:00:53.414185 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.09s
2025-08-29 15:00:53.414189 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.99s
2025-08-29 15:00:56.438788 | orchestrator | 2025-08-29 15:00:56 | INFO  | Task f9532f67-2c48-40a8-983e-5cb9fdd5a371 is in state STARTED
2025-08-29 15:00:56.441242 | orchestrator | 2025-08-29 15:00:56 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state STARTED
2025-08-29 15:00:56.443487 | orchestrator | 2025-08-29 15:00:56 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED
2025-08-29 15:00:56.443749 | orchestrator | 2025-08-29 15:00:56 | INFO  | Wait 1 second(s) until the next check
[... the same three status checks repeated every ~3 seconds from 15:00:59 through 15:02:58, all three tasks remaining in state STARTED ...]
2025-08-29 15:03:01.535218 | orchestrator | 2025-08-29 15:03:01 | INFO  | Task f9532f67-2c48-40a8-983e-5cb9fdd5a371 is in state STARTED
2025-08-29 15:03:01.540066 | orchestrator | 2025-08-29 15:03:01 | INFO  | Task e2994663-351e-45ea-b1da-7ea850e6ec77 is in state SUCCESS
2025-08-29 15:03:01.542402 | orchestrator |
2025-08-29 15:03:01.542443 | orchestrator |
2025-08-29 15:03:01.542461 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-08-29 15:03:01.542466 | orchestrator |
2025-08-29 15:03:01.542470 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-08-29 15:03:01.542475 | orchestrator | Friday 29 August 2025 14:51:12 +0000 (0:00:00.980) 0:00:00.980 *********
2025-08-29 15:03:01.542482 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.542490 | orchestrator |
2025-08-29 15:03:01.542495 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-08-29 15:03:01.542501 | orchestrator | Friday 29 August 2025 14:51:13 +0000 (0:00:01.500) 0:00:02.480 *********
2025-08-29 15:03:01.542508 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.542515 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.542521 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.542527 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.542533 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.542538 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.542545 | orchestrator |
2025-08-29 15:03:01.542552 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-08-29 15:03:01.542558 | orchestrator | Friday 29 August 2025 14:51:15 +0000 (0:00:01.734) 0:00:04.215 *********
2025-08-29 15:03:01.542565 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.542571 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.542578 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.542584 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.542591 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.542597 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.542604 | orchestrator |
2025-08-29 15:03:01.542608 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-08-29 15:03:01.542613 | orchestrator | Friday 29 August 2025 14:51:16 +0000 (0:00:00.861) 0:00:05.077 *********
2025-08-29 15:03:01.542616 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.542620 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.542624 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.542628 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.542632 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.542636 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.542640 | orchestrator |
2025-08-29 15:03:01.542643 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-08-29 15:03:01.542647 | orchestrator | Friday 29 August 2025 14:51:17 +0000 (0:00:00.986) 0:00:06.063 *********
2025-08-29 15:03:01.542651 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.542704 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.542709 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.542712 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.542716 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.542720 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.542724 | orchestrator |
2025-08-29 15:03:01.542744 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-08-29 15:03:01.542748 | orchestrator | Friday 29 August 2025 14:51:18 +0000 (0:00:00.938) 0:00:07.002 *********
2025-08-29 15:03:01.542752 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.542756 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.542759 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.542763 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.542767 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.542770 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.542774 | orchestrator |
2025-08-29 15:03:01.542778 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-08-29 15:03:01.542782 | orchestrator | Friday 29 August 2025 14:51:19 +0000 (0:00:00.776) 0:00:07.778 *********
2025-08-29 15:03:01.542786 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.542790 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.542793 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.542797 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.542801 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.542804 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.542808 | orchestrator |
2025-08-29 15:03:01.542812 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 15:03:01.542816 | orchestrator | Friday 29 August 2025 14:51:20 +0000 (0:00:01.098)
0:00:08.876 ********* 2025-08-29 15:03:01.542820 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.542825 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.542829 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.542833 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.542837 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.542841 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.542845 | orchestrator | 2025-08-29 15:03:01.542849 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-08-29 15:03:01.542853 | orchestrator | Friday 29 August 2025 14:51:21 +0000 (0:00:01.020) 0:00:09.896 ********* 2025-08-29 15:03:01.542857 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.542862 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.542866 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.542870 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.542874 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.542878 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.542882 | orchestrator | 2025-08-29 15:03:01.542886 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-08-29 15:03:01.542890 | orchestrator | Friday 29 August 2025 14:51:22 +0000 (0:00:01.515) 0:00:11.412 ********* 2025-08-29 15:03:01.542894 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:03:01.542898 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:03:01.542902 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:03:01.542906 | orchestrator | 2025-08-29 15:03:01.542911 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-08-29 15:03:01.542915 | orchestrator | Friday 
29 August 2025 14:51:23 +0000 (0:00:00.710) 0:00:12.122 ********* 2025-08-29 15:03:01.542919 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.542923 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.542927 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.542931 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.542935 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.542939 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.542943 | orchestrator | 2025-08-29 15:03:01.542962 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-08-29 15:03:01.542967 | orchestrator | Friday 29 August 2025 14:51:25 +0000 (0:00:01.560) 0:00:13.682 ********* 2025-08-29 15:03:01.542971 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:03:01.542975 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:03:01.542984 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:03:01.542988 | orchestrator | 2025-08-29 15:03:01.542992 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-08-29 15:03:01.542996 | orchestrator | Friday 29 August 2025 14:51:28 +0000 (0:00:03.705) 0:00:17.387 ********* 2025-08-29 15:03:01.543000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:03:01.543005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:03:01.543010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:03:01.543015 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543020 | orchestrator | 2025-08-29 15:03:01.543025 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-08-29 15:03:01.543135 | orchestrator | Friday 29 August 2025 
14:51:29 +0000 (0:00:00.816) 0:00:18.204 ********* 2025-08-29 15:03:01.543143 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543160 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543164 | orchestrator | 2025-08-29 15:03:01.543169 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-08-29 15:03:01.543174 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:00.701) 0:00:18.906 ********* 2025-08-29 15:03:01.543181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543194 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543200 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543204 | orchestrator | 2025-08-29 15:03:01.543210 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-08-29 15:03:01.543214 | orchestrator | Friday 29 August 2025 14:51:30 +0000 (0:00:00.272) 0:00:19.179 ********* 2025-08-29 15:03:01.543228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 14:51:25.890293', 'end': '2025-08-29 14:51:26.187593', 'delta': '0:00:00.297300', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 14:51:26.891094', 'end': 
'2025-08-29 14:51:27.155572', 'delta': '0:00:00.264478', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 14:51:27.980442', 'end': '2025-08-29 14:51:28.222542', 'delta': '0:00:00.242100', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.543255 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543259 | orchestrator | 2025-08-29 15:03:01.543264 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-08-29 15:03:01.543269 | orchestrator | Friday 29 August 2025 14:51:31 +0000 (0:00:00.750) 0:00:19.929 ********* 2025-08-29 15:03:01.543274 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.543279 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.543283 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.543288 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.543293 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.543298 | 
orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.543302 | orchestrator | 2025-08-29 15:03:01.543307 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-08-29 15:03:01.543312 | orchestrator | Friday 29 August 2025 14:51:34 +0000 (0:00:02.776) 0:00:22.705 ********* 2025-08-29 15:03:01.543317 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:03:01.543322 | orchestrator | 2025-08-29 15:03:01.543326 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-08-29 15:03:01.543331 | orchestrator | Friday 29 August 2025 14:51:34 +0000 (0:00:00.751) 0:00:23.457 ********* 2025-08-29 15:03:01.543336 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543341 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543346 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543351 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543356 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543367 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543372 | orchestrator | 2025-08-29 15:03:01.543377 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-08-29 15:03:01.543382 | orchestrator | Friday 29 August 2025 14:51:37 +0000 (0:00:02.068) 0:00:25.525 ********* 2025-08-29 15:03:01.543387 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543391 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543400 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543405 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543411 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543415 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543420 | orchestrator | 2025-08-29 15:03:01.543425 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2025-08-29 15:03:01.543430 | orchestrator | Friday 29 August 2025 14:51:39 +0000 (0:00:01.985) 0:00:27.510 ********* 2025-08-29 15:03:01.543434 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543438 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543443 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543447 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543451 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543455 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543459 | orchestrator | 2025-08-29 15:03:01.543463 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-08-29 15:03:01.543467 | orchestrator | Friday 29 August 2025 14:51:40 +0000 (0:00:01.835) 0:00:29.346 ********* 2025-08-29 15:03:01.543472 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543476 | orchestrator | 2025-08-29 15:03:01.543480 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-08-29 15:03:01.543484 | orchestrator | Friday 29 August 2025 14:51:41 +0000 (0:00:00.451) 0:00:29.798 ********* 2025-08-29 15:03:01.543488 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543492 | orchestrator | 2025-08-29 15:03:01.543497 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-08-29 15:03:01.543501 | orchestrator | Friday 29 August 2025 14:51:41 +0000 (0:00:00.341) 0:00:30.139 ********* 2025-08-29 15:03:01.543505 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543509 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543513 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543517 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543521 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543525 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 15:03:01.543530 | orchestrator | 2025-08-29 15:03:01.543540 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-08-29 15:03:01.543544 | orchestrator | Friday 29 August 2025 14:51:42 +0000 (0:00:01.147) 0:00:31.287 ********* 2025-08-29 15:03:01.543548 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543553 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543557 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543561 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543565 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543569 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543573 | orchestrator | 2025-08-29 15:03:01.543577 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-08-29 15:03:01.543581 | orchestrator | Friday 29 August 2025 14:51:44 +0000 (0:00:01.353) 0:00:32.640 ********* 2025-08-29 15:03:01.543585 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543590 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543594 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543598 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543602 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543606 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543610 | orchestrator | 2025-08-29 15:03:01.543614 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-08-29 15:03:01.543619 | orchestrator | Friday 29 August 2025 14:51:45 +0000 (0:00:01.063) 0:00:33.704 ********* 2025-08-29 15:03:01.543623 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543627 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543631 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543635 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 15:03:01.543639 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543646 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543651 | orchestrator | 2025-08-29 15:03:01.543667 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-08-29 15:03:01.543671 | orchestrator | Friday 29 August 2025 14:51:46 +0000 (0:00:01.112) 0:00:34.816 ********* 2025-08-29 15:03:01.543675 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543680 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543684 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543688 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543692 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543696 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543700 | orchestrator | 2025-08-29 15:03:01.543704 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-08-29 15:03:01.543708 | orchestrator | Friday 29 August 2025 14:51:47 +0000 (0:00:00.835) 0:00:35.652 ********* 2025-08-29 15:03:01.543713 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543717 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.543721 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543725 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543729 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543733 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543737 | orchestrator | 2025-08-29 15:03:01.543742 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-08-29 15:03:01.543746 | orchestrator | Friday 29 August 2025 14:51:48 +0000 (0:00:01.039) 0:00:36.692 ********* 2025-08-29 15:03:01.543750 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.543754 | orchestrator | skipping: 
[testbed-node-4] 2025-08-29 15:03:01.543758 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.543762 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.543766 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.543770 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.543774 | orchestrator | 2025-08-29 15:03:01.543779 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-08-29 15:03:01.543783 | orchestrator | Friday 29 August 2025 14:51:49 +0000 (0:00:00.857) 0:00:37.549 ********* 2025-08-29 15:03:01.543788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4c2f47a1--6693--5b64--9c97--de0e0041f7f6-osd--block--4c2f47a1--6693--5b64--9c97--de0e0041f7f6', 'dm-uuid-LVM-Bp5IZIwJszEoPKs6GxQSx36pvmgQf6q0IyrE6ewHb9DMU0L0xp7HNmv46iu3XxJl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--218f7b56--b785--5eaf--b35f--b0ddc87960c6-osd--block--218f7b56--b785--5eaf--b35f--b0ddc87960c6', 'dm-uuid-LVM-TCuPDh3Kkt6qr7lxpx96YD8cOAfUV0veRcqeVF90lZRHkrvUQO1xQSZkfh4ATm9z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cd5b7d9a--1dd4--5184--a319--6c247fab2039-osd--block--cd5b7d9a--1dd4--5184--a319--6c247fab2039', 'dm-uuid-LVM-jvaVJ10Fcpsrf1MTBY8qTdZ2Gmf4tvfjrxCUn0JKSDlbhS0WygbP9Vo61jKxYujP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95dc25c6--61fb--51c1--a723--34c7e57ec220-osd--block--95dc25c6--61fb--51c1--a723--34c7e57ec220', 'dm-uuid-LVM-N95ON7yjd24XBIBBInOWMAWyxHtTxspjTYoG7FDOaB2vvWjw1Ow5naJsFLKGQSQe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543844 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.543870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part1', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part14', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part15', 
'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part16', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4c2f47a1--6693--5b64--9c97--de0e0041f7f6-osd--block--4c2f47a1--6693--5b64--9c97--de0e0041f7f6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g6gG1M-IL2V-0Amf-c0c4-cNnW-1fYD-P7ziHe', 'scsi-0QEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7', 'scsi-SQEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--218f7b56--b785--5eaf--b35f--b0ddc87960c6-osd--block--218f7b56--b785--5eaf--b35f--b0ddc87960c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-apmPRc-0tLh-gd7f-MAbU-v5aI-vXhU-6ffmio', 'scsi-0QEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61', 'scsi-SQEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012', 'scsi-SQEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part1', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part14', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part15', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part16', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544123 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cd5b7d9a--1dd4--5184--a319--6c247fab2039-osd--block--cd5b7d9a--1dd4--5184--a319--6c247fab2039'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XDJnt6-q2Eo-YK5E-585i-i4Kv-BrAS-eFQVNK', 'scsi-0QEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24', 'scsi-SQEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--95dc25c6--61fb--51c1--a723--34c7e57ec220-osd--block--95dc25c6--61fb--51c1--a723--34c7e57ec220'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gVkfSh-TfT9-3kSw-mC7z-BdmD-ou8j-HrzLNf', 'scsi-0QEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9', 'scsi-SQEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea955146--254c--5a5a--83ec--c4f4ca16d6a1-osd--block--ea955146--254c--5a5a--83ec--c4f4ca16d6a1', 'dm-uuid-LVM-GA0Ozd01uf4NtDu82eUfKUTgH86R8g26EFOKy90nOoZcEQ0B214t60sPYQkvVMlo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aeb09036--0b6a--534a--a94a--678fcf7bc5df-osd--block--aeb09036--0b6a--534a--a94a--678fcf7bc5df', 'dm-uuid-LVM-QWmHQARGOg6TrjUoKwNCJiKNVBi3jngnNrA7HneS0AvBK79Eij2Pgug45a7oXxag'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f', 'scsi-SQEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544220 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.544230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ea955146--254c--5a5a--83ec--c4f4ca16d6a1-osd--block--ea955146--254c--5a5a--83ec--c4f4ca16d6a1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XM2r6Z-mfGm-hHfb-jDir-devG-vA6W-Zce22J', 'scsi-0QEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b', 'scsi-SQEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aeb09036--0b6a--534a--a94a--678fcf7bc5df-osd--block--aeb09036--0b6a--534a--a94a--678fcf7bc5df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5LeC74-eFEO-dWsq-6JVp-2A0l-KQB8-M37UwZ', 'scsi-0QEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a', 'scsi-SQEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8', 'scsi-SQEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544249 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.544253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part1', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part14', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part15', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part16', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544334 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.544339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544351 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.544355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part1', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part14', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part15', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part16', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544376 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544380 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.544391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 15:03:01.544404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:03:01.544474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part1', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part14', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part15', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part16', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:03:01.544500 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.544504 | orchestrator | 2025-08-29 15:03:01.544509 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-08-29 15:03:01.544513 | orchestrator | Friday 29 August 2025 14:51:50 +0000 (0:00:01.602) 0:00:39.152 ********* 2025-08-29 15:03:01.544518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4c2f47a1--6693--5b64--9c97--de0e0041f7f6-osd--block--4c2f47a1--6693--5b64--9c97--de0e0041f7f6', 'dm-uuid-LVM-Bp5IZIwJszEoPKs6GxQSx36pvmgQf6q0IyrE6ewHb9DMU0L0xp7HNmv46iu3XxJl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--218f7b56--b785--5eaf--b35f--b0ddc87960c6-osd--block--218f7b56--b785--5eaf--b35f--b0ddc87960c6', 'dm-uuid-LVM-TCuPDh3Kkt6qr7lxpx96YD8cOAfUV0veRcqeVF90lZRHkrvUQO1xQSZkfh4ATm9z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545131 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545152 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part1', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part14', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part15', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part16', 
'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4c2f47a1--6693--5b64--9c97--de0e0041f7f6-osd--block--4c2f47a1--6693--5b64--9c97--de0e0041f7f6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g6gG1M-IL2V-0Amf-c0c4-cNnW-1fYD-P7ziHe', 'scsi-0QEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7', 'scsi-SQEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545178 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--218f7b56--b785--5eaf--b35f--b0ddc87960c6-osd--block--218f7b56--b785--5eaf--b35f--b0ddc87960c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-apmPRc-0tLh-gd7f-MAbU-v5aI-vXhU-6ffmio', 'scsi-0QEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61', 'scsi-SQEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545183 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012', 'scsi-SQEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cd5b7d9a--1dd4--5184--a319--6c247fab2039-osd--block--cd5b7d9a--1dd4--5184--a319--6c247fab2039', 'dm-uuid-LVM-jvaVJ10Fcpsrf1MTBY8qTdZ2Gmf4tvfjrxCUn0JKSDlbhS0WygbP9Vo61jKxYujP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95dc25c6--61fb--51c1--a723--34c7e57ec220-osd--block--95dc25c6--61fb--51c1--a723--34c7e57ec220', 'dm-uuid-LVM-N95ON7yjd24XBIBBInOWMAWyxHtTxspjTYoG7FDOaB2vvWjw1Ow5naJsFLKGQSQe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545210 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545225 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545230 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545234 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545246 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part1', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part14', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part15', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part16', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:03:01.545266 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cd5b7d9a--1dd4--5184--a319--6c247fab2039-osd--block--cd5b7d9a--1dd4--5184--a319--6c247fab2039'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XDJnt6-q2Eo-YK5E-585i-i4Kv-BrAS-eFQVNK', 'scsi-0QEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24', 'scsi-SQEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:03:01.545274 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--95dc25c6--61fb--51c1--a723--34c7e57ec220-osd--block--95dc25c6--61fb--51c1--a723--34c7e57ec220'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gVkfSh-TfT9-3kSw-mC7z-BdmD-ou8j-HrzLNf', 'scsi-0QEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9', 'scsi-SQEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545279 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f', 'scsi-SQEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545288 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545293 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.545297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea955146--254c--5a5a--83ec--c4f4ca16d6a1-osd--block--ea955146--254c--5a5a--83ec--c4f4ca16d6a1', 'dm-uuid-LVM-GA0Ozd01uf4NtDu82eUfKUTgH86R8g26EFOKy90nOoZcEQ0B214t60sPYQkvVMlo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545304 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.545311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aeb09036--0b6a--534a--a94a--678fcf7bc5df-osd--block--aeb09036--0b6a--534a--a94a--678fcf7bc5df', 'dm-uuid-LVM-QWmHQARGOg6TrjUoKwNCJiKNVBi3jngnNrA7HneS0AvBK79Eij2Pgug45a7oXxag'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545320 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545325 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545331 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545336 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545344 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545353 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545357 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545362 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545366 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.545377 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ea955146--254c--5a5a--83ec--c4f4ca16d6a1-osd--block--ea955146--254c--5a5a--83ec--c4f4ca16d6a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XM2r6Z-mfGm-hHfb-jDir-devG-vA6W-Zce22J', 'scsi-0QEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b', 'scsi-SQEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546537 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546602 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aeb09036--0b6a--534a--a94a--678fcf7bc5df-osd--block--aeb09036--0b6a--534a--a94a--678fcf7bc5df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5LeC74-eFEO-dWsq-6JVp-2A0l-KQB8-M37UwZ', 'scsi-0QEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a', 'scsi-SQEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546623 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8', 'scsi-SQEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546882 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546913 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546946 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.546982 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part1', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part14', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part15', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part16', 'scsi-SQEMU_QEMU_HARDDISK_34603090-c146-4151-9356-33e1f81df516-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547009 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547029 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.547048 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547076 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547104 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547123 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547141 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547159 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547182 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547210 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547239 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part1', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part14', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part15', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part16', 'scsi-SQEMU_QEMU_HARDDISK_198780c2-b0aa-4267-81d7-dd433498eb4e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547258 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-08-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547284 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.547302 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.547333 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547351 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547379 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547401 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547420 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547438 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547465 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.547574 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.548052 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part1', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part14', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part15', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part16', 'scsi-SQEMU_QEMU_HARDDISK_26a03f40-a287-4201-85ef-dae46b1b8ac7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.548087 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-08-29 15:03:01.548110 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.548123 | orchestrator |
2025-08-29 15:03:01.548135 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-08-29 15:03:01.548147 | orchestrator | Friday 29 August 2025 14:51:52 +0000 (0:00:01.659) 0:00:40.811 *********
2025-08-29 15:03:01.548165 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.548178 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.548188 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.548200 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.548210 |
orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.548221 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.548232 | orchestrator | 2025-08-29 15:03:01.548244 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 15:03:01.548255 | orchestrator | Friday 29 August 2025 14:51:54 +0000 (0:00:01.704) 0:00:42.516 ********* 2025-08-29 15:03:01.548266 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.548277 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.548289 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.548300 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.548312 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.548323 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.548334 | orchestrator | 2025-08-29 15:03:01.548345 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:03:01.548357 | orchestrator | Friday 29 August 2025 14:51:55 +0000 (0:00:01.226) 0:00:43.742 ********* 2025-08-29 15:03:01.548368 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.548379 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.548390 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.548401 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.548412 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.549062 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.549101 | orchestrator | 2025-08-29 15:03:01.549269 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:03:01.549503 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:00.959) 0:00:44.702 ********* 2025-08-29 15:03:01.549523 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.549534 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.549545 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
15:03:01.549556 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.549568 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.549579 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.549590 | orchestrator | 2025-08-29 15:03:01.549601 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:03:01.549612 | orchestrator | Friday 29 August 2025 14:51:56 +0000 (0:00:00.777) 0:00:45.479 ********* 2025-08-29 15:03:01.549623 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.549635 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.549647 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.549690 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.549841 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.550183 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.550213 | orchestrator | 2025-08-29 15:03:01.550234 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:03:01.550250 | orchestrator | Friday 29 August 2025 14:51:58 +0000 (0:00:01.122) 0:00:46.602 ********* 2025-08-29 15:03:01.550261 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.550273 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.550284 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.550295 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.550306 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.550332 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.550343 | orchestrator | 2025-08-29 15:03:01.550407 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 15:03:01.550419 | orchestrator | Friday 29 August 2025 14:51:59 +0000 (0:00:01.235) 0:00:47.837 ********* 2025-08-29 15:03:01.550434 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-0) 2025-08-29 15:03:01.550446 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 15:03:01.550459 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 15:03:01.550473 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 15:03:01.551399 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 15:03:01.551434 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:03:01.551448 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 15:03:01.551461 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-08-29 15:03:01.551473 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 15:03:01.551486 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-08-29 15:03:01.551500 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 15:03:01.551512 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 15:03:01.551525 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 15:03:01.551538 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-08-29 15:03:01.551552 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 15:03:01.551565 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-08-29 15:03:01.551577 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-08-29 15:03:01.551590 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-08-29 15:03:01.551604 | orchestrator | 2025-08-29 15:03:01.551618 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 15:03:01.551632 | orchestrator | Friday 29 August 2025 14:52:05 +0000 (0:00:06.376) 0:00:54.213 ********* 2025-08-29 15:03:01.551644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:03:01.551771 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:03:01.551796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:03:01.551811 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.551825 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 15:03:01.551838 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 15:03:01.551851 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 15:03:01.551864 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.551877 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 15:03:01.551889 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 15:03:01.551902 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 15:03:01.551928 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.551942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:03:01.551954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:03:01.551967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:03:01.551980 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.551993 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-08-29 15:03:01.552006 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-08-29 15:03:01.552018 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-08-29 15:03:01.552031 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.552044 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-08-29 15:03:01.552058 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-08-29 15:03:01.552071 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-08-29 15:03:01.552098 | orchestrator | 
skipping: [testbed-node-2] 2025-08-29 15:03:01.552111 | orchestrator | 2025-08-29 15:03:01.552124 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 15:03:01.552137 | orchestrator | Friday 29 August 2025 14:52:07 +0000 (0:00:01.370) 0:00:55.584 ********* 2025-08-29 15:03:01.552149 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.552162 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.552175 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.552188 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.552199 | orchestrator | 2025-08-29 15:03:01.552210 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 15:03:01.552222 | orchestrator | Friday 29 August 2025 14:52:08 +0000 (0:00:01.515) 0:00:57.100 ********* 2025-08-29 15:03:01.552242 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.552259 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.552275 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.552293 | orchestrator | 2025-08-29 15:03:01.552308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 15:03:01.552324 | orchestrator | Friday 29 August 2025 14:52:09 +0000 (0:00:00.486) 0:00:57.586 ********* 2025-08-29 15:03:01.552414 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.552435 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.552452 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.552462 | orchestrator | 2025-08-29 15:03:01.552473 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 15:03:01.552483 | orchestrator | Friday 29 August 2025 14:52:09 +0000 
(0:00:00.413) 0:00:58.000 ********* 2025-08-29 15:03:01.552492 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.552502 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.552512 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.552521 | orchestrator | 2025-08-29 15:03:01.552531 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 15:03:01.552541 | orchestrator | Friday 29 August 2025 14:52:10 +0000 (0:00:00.751) 0:00:58.752 ********* 2025-08-29 15:03:01.552551 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.552561 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.552571 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.552580 | orchestrator | 2025-08-29 15:03:01.552590 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 15:03:01.552600 | orchestrator | Friday 29 August 2025 14:52:11 +0000 (0:00:00.962) 0:00:59.714 ********* 2025-08-29 15:03:01.552609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:01.552619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:01.552629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:01.552638 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.552648 | orchestrator | 2025-08-29 15:03:01.552696 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 15:03:01.552708 | orchestrator | Friday 29 August 2025 14:52:11 +0000 (0:00:00.613) 0:01:00.328 ********* 2025-08-29 15:03:01.552717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:01.552727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:01.552737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:01.552747 | orchestrator | 
skipping: [testbed-node-3] 2025-08-29 15:03:01.552756 | orchestrator | 2025-08-29 15:03:01.552766 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 15:03:01.552776 | orchestrator | Friday 29 August 2025 14:52:12 +0000 (0:00:01.010) 0:01:01.339 ********* 2025-08-29 15:03:01.552786 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:01.552807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:01.552817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:01.552827 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.552836 | orchestrator | 2025-08-29 15:03:01.552846 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 15:03:01.552856 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:00.490) 0:01:01.830 ********* 2025-08-29 15:03:01.552866 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.552875 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.552885 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.552895 | orchestrator | 2025-08-29 15:03:01.552905 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 15:03:01.552915 | orchestrator | Friday 29 August 2025 14:52:13 +0000 (0:00:00.552) 0:01:02.383 ********* 2025-08-29 15:03:01.552924 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:03:01.552934 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:03:01.552944 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 15:03:01.552954 | orchestrator | 2025-08-29 15:03:01.552963 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 15:03:01.552981 | orchestrator | Friday 29 August 2025 14:52:15 +0000 (0:00:01.591) 0:01:03.974 ********* 2025-08-29 15:03:01.552991 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:03:01.553002 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:03:01.553012 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:03:01.553021 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:03:01.553032 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:03:01.553042 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:03:01.553051 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:03:01.553061 | orchestrator | 2025-08-29 15:03:01.553071 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 15:03:01.553081 | orchestrator | Friday 29 August 2025 14:52:16 +0000 (0:00:01.160) 0:01:05.134 ********* 2025-08-29 15:03:01.553090 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:03:01.553100 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:03:01.553109 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:03:01.553119 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:03:01.553129 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:03:01.553139 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:03:01.553149 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:03:01.553158 | orchestrator | 2025-08-29 15:03:01.553168 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:03:01.553178 | orchestrator | Friday 29 August 2025 14:52:18 +0000 (0:00:02.174) 0:01:07.309 ********* 2025-08-29 15:03:01.553227 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:01.553241 | orchestrator | 2025-08-29 15:03:01.553251 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:03:01.553261 | orchestrator | Friday 29 August 2025 14:52:20 +0000 (0:00:01.624) 0:01:08.934 ********* 2025-08-29 15:03:01.553271 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:01.553289 | orchestrator | 2025-08-29 15:03:01.553299 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:03:01.553309 | orchestrator | Friday 29 August 2025 14:52:22 +0000 (0:00:01.945) 0:01:10.879 ********* 2025-08-29 15:03:01.553319 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.553328 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.553338 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.553348 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.553358 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.553367 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.553377 | orchestrator | 2025-08-29 15:03:01.553387 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:03:01.553396 | orchestrator | Friday 29 August 2025 14:52:24 +0000 (0:00:01.996) 0:01:12.876 ********* 2025-08-29 15:03:01.553406 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:03:01.553416 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.553425 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.553435 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.553444 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.553454 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.553464 | orchestrator | 2025-08-29 15:03:01.553474 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:03:01.553483 | orchestrator | Friday 29 August 2025 14:52:25 +0000 (0:00:01.146) 0:01:14.022 ********* 2025-08-29 15:03:01.553493 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.553503 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.553512 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.553522 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.553532 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.553542 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.553551 | orchestrator | 2025-08-29 15:03:01.553561 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:03:01.553571 | orchestrator | Friday 29 August 2025 14:52:27 +0000 (0:00:01.638) 0:01:15.660 ********* 2025-08-29 15:03:01.553580 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.553590 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.553600 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.553609 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.553619 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.553629 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.553639 | orchestrator | 2025-08-29 15:03:01.553649 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:03:01.553682 | orchestrator | Friday 29 August 2025 14:52:28 +0000 (0:00:01.664) 0:01:17.325 ********* 
2025-08-29 15:03:01.553693 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.553703 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.553712 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.553722 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.553732 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.553741 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.553751 | orchestrator | 2025-08-29 15:03:01.553761 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:03:01.553776 | orchestrator | Friday 29 August 2025 14:52:31 +0000 (0:00:02.377) 0:01:19.703 ********* 2025-08-29 15:03:01.553786 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.553796 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.553805 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.553815 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.553824 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.553834 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.553844 | orchestrator | 2025-08-29 15:03:01.553854 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:03:01.553863 | orchestrator | Friday 29 August 2025 14:52:32 +0000 (0:00:00.973) 0:01:20.677 ********* 2025-08-29 15:03:01.553879 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.553889 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.553898 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.553908 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.553918 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.553927 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.553937 | orchestrator | 2025-08-29 15:03:01.553946 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] 
************************* 2025-08-29 15:03:01.553956 | orchestrator | Friday 29 August 2025 14:52:33 +0000 (0:00:01.619) 0:01:22.296 ********* 2025-08-29 15:03:01.553966 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.553976 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.553985 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.553995 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.554005 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.554014 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.554063 | orchestrator | 2025-08-29 15:03:01.554074 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:03:01.554084 | orchestrator | Friday 29 August 2025 14:52:35 +0000 (0:00:01.494) 0:01:23.791 ********* 2025-08-29 15:03:01.554094 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.554104 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.554114 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.554124 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.554133 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.554142 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.554152 | orchestrator | 2025-08-29 15:03:01.554162 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:03:01.554172 | orchestrator | Friday 29 August 2025 14:52:36 +0000 (0:00:01.478) 0:01:25.269 ********* 2025-08-29 15:03:01.554182 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.554192 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.554237 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.554249 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.554259 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.554269 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.554279 | orchestrator | 2025-08-29 15:03:01.554289 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:03:01.554299 | orchestrator | Friday 29 August 2025 14:52:37 +0000 (0:00:00.836) 0:01:26.106 ********* 2025-08-29 15:03:01.554309 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.554319 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.554328 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.554338 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.554348 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.554357 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.554367 | orchestrator | 2025-08-29 15:03:01.554377 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:03:01.554387 | orchestrator | Friday 29 August 2025 14:52:38 +0000 (0:00:00.700) 0:01:26.806 ********* 2025-08-29 15:03:01.554396 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.554406 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.554416 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.554426 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.554436 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.554445 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.554455 | orchestrator | 2025-08-29 15:03:01.554465 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:03:01.554475 | orchestrator | Friday 29 August 2025 14:52:39 +0000 (0:00:01.026) 0:01:27.833 ********* 2025-08-29 15:03:01.554485 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.554494 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.554504 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.554514 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.554531 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.554541 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 15:03:01.554551 | orchestrator | 2025-08-29 15:03:01.554561 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:03:01.554570 | orchestrator | Friday 29 August 2025 14:52:40 +0000 (0:00:00.793) 0:01:28.626 ********* 2025-08-29 15:03:01.554580 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.554590 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.554600 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.554610 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.554620 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.554630 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.554640 | orchestrator | 2025-08-29 15:03:01.554649 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:03:01.554678 | orchestrator | Friday 29 August 2025 14:52:41 +0000 (0:00:01.024) 0:01:29.651 ********* 2025-08-29 15:03:01.554689 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.554699 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.554709 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.554719 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.554728 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.554738 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.554748 | orchestrator | 2025-08-29 15:03:01.554758 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:03:01.554768 | orchestrator | Friday 29 August 2025 14:52:41 +0000 (0:00:00.598) 0:01:30.250 ********* 2025-08-29 15:03:01.554778 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.554787 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.554797 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.554807 | orchestrator | skipping: [testbed-node-0] 
2025-08-29 15:03:01.554817 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.554827 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.554837 | orchestrator | 2025-08-29 15:03:01.554852 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:03:01.554863 | orchestrator | Friday 29 August 2025 14:52:42 +0000 (0:00:00.898) 0:01:31.149 ********* 2025-08-29 15:03:01.554873 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.554882 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.554892 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.554902 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.554912 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.554922 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.554932 | orchestrator | 2025-08-29 15:03:01.554942 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:03:01.554952 | orchestrator | Friday 29 August 2025 14:52:43 +0000 (0:00:00.941) 0:01:32.090 ********* 2025-08-29 15:03:01.554961 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.554971 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.554981 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.554991 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.555000 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.555010 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.555020 | orchestrator | 2025-08-29 15:03:01.555030 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:03:01.555040 | orchestrator | Friday 29 August 2025 14:52:44 +0000 (0:00:00.896) 0:01:32.987 ********* 2025-08-29 15:03:01.555050 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.555060 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.555070 | orchestrator | ok: [testbed-node-0] 
2025-08-29 15:03:01.555080 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.555090 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.555100 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.555111 | orchestrator |
2025-08-29 15:03:01.555127 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-08-29 15:03:01.555161 | orchestrator | Friday 29 August 2025 14:52:46 +0000 (0:00:01.720) 0:01:34.707 *********
2025-08-29 15:03:01.555177 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.555194 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.555210 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.555223 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.555233 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.555242 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.555252 | orchestrator |
2025-08-29 15:03:01.555262 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-08-29 15:03:01.555272 | orchestrator | Friday 29 August 2025 14:52:47 +0000 (0:00:01.374) 0:01:36.081 *********
2025-08-29 15:03:01.555282 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.555329 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.555340 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.555350 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.555359 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.555369 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.555378 | orchestrator |
2025-08-29 15:03:01.555388 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-08-29 15:03:01.555398 | orchestrator | Friday 29 August 2025 14:52:49 +0000 (0:00:02.008) 0:01:38.090 *********
2025-08-29 15:03:01.555409 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.555418 | orchestrator |
2025-08-29 15:03:01.555428 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-08-29 15:03:01.555438 | orchestrator | Friday 29 August 2025 14:52:50 +0000 (0:00:01.076) 0:01:39.167 *********
2025-08-29 15:03:01.555447 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.555457 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.555466 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.555476 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.555485 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.555495 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.555504 | orchestrator |
2025-08-29 15:03:01.555515 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-08-29 15:03:01.555524 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.535) 0:01:39.702 *********
2025-08-29 15:03:01.555534 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.555544 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.555554 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.555563 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.555573 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.555583 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.555592 | orchestrator |
2025-08-29 15:03:01.555602 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-08-29 15:03:01.555612 | orchestrator | Friday 29 August 2025 14:52:51 +0000 (0:00:00.687) 0:01:40.390 *********
2025-08-29 15:03:01.555621 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:03:01.555631 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:03:01.555640 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:03:01.555650 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:03:01.555819 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:03:01.555851 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-08-29 15:03:01.555861 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:03:01.555871 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:03:01.555881 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:03:01.555906 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:03:01.555916 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:03:01.555932 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-08-29 15:03:01.555942 | orchestrator |
2025-08-29 15:03:01.555952 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-08-29 15:03:01.555962 | orchestrator | Friday 29 August 2025 14:52:53 +0000 (0:00:01.192) 0:01:41.583 *********
2025-08-29 15:03:01.555972 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.555982 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.555991 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.556001 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.556011 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.556021 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.556031 | orchestrator |
2025-08-29 15:03:01.556040 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-08-29 15:03:01.556050 | orchestrator | Friday 29 August 2025 14:52:54 +0000 (0:00:01.035) 0:01:42.618 *********
2025-08-29 15:03:01.556060 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.556070 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.556080 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.556090 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.556099 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.556109 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.556119 | orchestrator |
2025-08-29 15:03:01.556129 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-08-29 15:03:01.556138 | orchestrator | Friday 29 August 2025 14:52:54 +0000 (0:00:00.542) 0:01:43.160 *********
2025-08-29 15:03:01.556148 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.556157 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.556167 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.556177 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.556186 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.556196 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.556206 | orchestrator |
2025-08-29 15:03:01.556216 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-08-29 15:03:01.556225 | orchestrator | Friday 29 August 2025 14:52:55 +0000 (0:00:00.734) 0:01:43.895 *********
2025-08-29 15:03:01.556234 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.556242 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.556250 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.556258 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.556266 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.556274 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.556282 | orchestrator |
2025-08-29 15:03:01.556347 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-08-29 15:03:01.556357 | orchestrator | Friday 29 August 2025 14:52:55 +0000 (0:00:00.529) 0:01:44.425 *********
2025-08-29 15:03:01.556365 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.556373 | orchestrator |
2025-08-29 15:03:01.556381 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-08-29 15:03:01.556389 | orchestrator | Friday 29 August 2025 14:52:57 +0000 (0:00:01.080) 0:01:45.506 *********
2025-08-29 15:03:01.556397 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.556405 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.556413 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.556421 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.556429 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.556444 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.556452 | orchestrator |
2025-08-29 15:03:01.556460 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-08-29 15:03:01.556468 | orchestrator | Friday 29 August 2025 14:54:23 +0000 (0:01:26.297) 0:03:11.803 *********
2025-08-29 15:03:01.556476 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:03:01.556483 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:03:01.556491 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:03:01.556499 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.556507 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:03:01.556515 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:03:01.556523 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:03:01.556531 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.556540 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:03:01.556548 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:03:01.556556 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:03:01.556564 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:03:01.556572 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:03:01.556579 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:03:01.556587 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.556595 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:03:01.556603 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:03:01.556611 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:03:01.556619 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.556627 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.556635 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-08-29 15:03:01.556643 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-08-29 15:03:01.556684 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-08-29 15:03:01.556700 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.556714 | orchestrator |
2025-08-29 15:03:01.556727 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-08-29 15:03:01.556742 | orchestrator | Friday 29 August 2025 14:54:24 +0000 (0:00:01.060) 0:03:12.864 *********
2025-08-29 15:03:01.556751 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.556759 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.556767 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.556774 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.556782 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.556790 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.556798 | orchestrator |
2025-08-29 15:03:01.556806 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-08-29 15:03:01.556814 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:01.197) 0:03:14.061 *********
2025-08-29 15:03:01.556822 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.556830 | orchestrator |
2025-08-29 15:03:01.556838 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-08-29 15:03:01.556846 | orchestrator | Friday 29 August 2025 14:54:25 +0000 (0:00:00.175) 0:03:14.237 *********
2025-08-29 15:03:01.556854 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.556862 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.556870 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.556888 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.556896 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.556904 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.556912 | orchestrator |
2025-08-29 15:03:01.556920 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-08-29 15:03:01.556929 | orchestrator | Friday 29 August 2025 14:54:26 +0000 (0:00:00.740) 0:03:14.978 *********
2025-08-29 15:03:01.556937 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.556945 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.556952 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.556960 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.556980 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.556988 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.556996 | orchestrator |
2025-08-29 15:03:01.557004 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-08-29 15:03:01.557012 | orchestrator | Friday 29 August 2025 14:54:27 +0000 (0:00:01.070) 0:03:16.048 *********
2025-08-29 15:03:01.557053 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557063 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557071 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557078 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557086 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557094 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557102 | orchestrator |
2025-08-29 15:03:01.557110 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-08-29 15:03:01.557118 | orchestrator | Friday 29 August 2025 14:54:28 +0000 (0:00:00.991) 0:03:17.040 *********
2025-08-29 15:03:01.557126 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.557134 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.557142 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.557150 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.557158 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.557166 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.557174 | orchestrator |
2025-08-29 15:03:01.557182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-08-29 15:03:01.557190 | orchestrator | Friday 29 August 2025 14:54:31 +0000 (0:00:02.792) 0:03:19.832 *********
2025-08-29 15:03:01.557198 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.557206 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.557214 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.557222 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.557230 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.557238 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.557246 | orchestrator |
2025-08-29 15:03:01.557254 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-08-29 15:03:01.557262 | orchestrator | Friday 29 August 2025 14:54:32 +0000 (0:00:00.739) 0:03:20.571 *********
2025-08-29 15:03:01.557271 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.557280 | orchestrator |
2025-08-29 15:03:01.557288 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-08-29 15:03:01.557296 | orchestrator | Friday 29 August 2025 14:54:33 +0000 (0:00:01.506) 0:03:22.078 *********
2025-08-29 15:03:01.557304 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557312 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557321 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557330 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557339 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557347 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557356 | orchestrator |
2025-08-29 15:03:01.557365 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-08-29 15:03:01.557374 | orchestrator | Friday 29 August 2025 14:54:34 +0000 (0:00:01.039) 0:03:23.118 *********
2025-08-29 15:03:01.557390 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557399 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557408 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557417 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557425 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557434 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557442 | orchestrator |
2025-08-29 15:03:01.557451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-08-29 15:03:01.557460 | orchestrator | Friday 29 August 2025 14:54:36 +0000 (0:00:01.416) 0:03:24.534 *********
2025-08-29 15:03:01.557469 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557477 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557486 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557495 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557503 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557512 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557520 | orchestrator |
2025-08-29 15:03:01.557535 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-08-29 15:03:01.557544 | orchestrator | Friday 29 August 2025 14:54:36 +0000 (0:00:00.859) 0:03:25.394 *********
2025-08-29 15:03:01.557553 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557561 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557570 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557578 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557587 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557596 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557604 | orchestrator |
2025-08-29 15:03:01.557613 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-08-29 15:03:01.557622 | orchestrator | Friday 29 August 2025 14:54:37 +0000 (0:00:00.616) 0:03:26.010 *********
2025-08-29 15:03:01.557630 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557639 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557647 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557680 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557692 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557700 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557709 | orchestrator |
2025-08-29 15:03:01.557718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-08-29 15:03:01.557727 | orchestrator | Friday 29 August 2025 14:54:38 +0000 (0:00:00.817) 0:03:26.827 *********
2025-08-29 15:03:01.557736 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557744 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557753 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557761 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557770 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557778 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557787 | orchestrator |
2025-08-29 15:03:01.557796 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-08-29 15:03:01.557805 | orchestrator | Friday 29 August 2025 14:54:39 +0000 (0:00:00.781) 0:03:27.609 *********
2025-08-29 15:03:01.557814 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557822 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557831 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557839 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557848 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557857 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557866 | orchestrator |
2025-08-29 15:03:01.557875 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-08-29 15:03:01.557919 | orchestrator | Friday 29 August 2025 14:54:39 +0000 (0:00:00.640) 0:03:28.250 *********
2025-08-29 15:03:01.557929 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.557938 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.557956 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.557965 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.557974 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.557982 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.557991 | orchestrator |
2025-08-29 15:03:01.558000 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-08-29 15:03:01.558008 | orchestrator | Friday 29 August 2025 14:54:40 +0000 (0:00:00.850) 0:03:29.100 *********
2025-08-29 15:03:01.558054 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.558066 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.558075 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.558084 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.558093 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.558102 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.558111 | orchestrator |
2025-08-29 15:03:01.558119 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-08-29 15:03:01.558128 | orchestrator | Friday 29 August 2025 14:54:41 +0000 (0:00:01.146) 0:03:30.247 *********
2025-08-29 15:03:01.558138 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.558147 | orchestrator |
2025-08-29 15:03:01.558156 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-08-29 15:03:01.558164 | orchestrator | Friday 29 August 2025 14:54:43 +0000 (0:00:01.395) 0:03:31.643 *********
2025-08-29 15:03:01.558173 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-08-29 15:03:01.558182 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-08-29 15:03:01.558191 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-08-29 15:03:01.558200 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-08-29 15:03:01.558208 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-08-29 15:03:01.558217 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-08-29 15:03:01.558226 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-08-29 15:03:01.558234 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-08-29 15:03:01.558243 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-08-29 15:03:01.558252 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-08-29 15:03:01.558260 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-08-29 15:03:01.558269 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-08-29 15:03:01.558278 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-08-29 15:03:01.558286 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-08-29 15:03:01.558295 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-08-29 15:03:01.558304 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-08-29 15:03:01.558312 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-08-29 15:03:01.558321 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-08-29 15:03:01.558330 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-08-29 15:03:01.558339 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-08-29 15:03:01.558353 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-08-29 15:03:01.558362 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-08-29 15:03:01.558371 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-08-29 15:03:01.558380 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-08-29 15:03:01.558388 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-08-29 15:03:01.558397 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-08-29 15:03:01.558405 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-08-29 15:03:01.558414 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-08-29 15:03:01.558445 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-08-29 15:03:01.558455 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-08-29 15:03:01.558464 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-08-29 15:03:01.558472 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-08-29 15:03:01.558481 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-08-29 15:03:01.558490 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-08-29 15:03:01.558498 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-08-29 15:03:01.558507 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-08-29 15:03:01.558516 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:03:01.558524 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-08-29 15:03:01.558533 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-08-29 15:03:01.558541 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-08-29 15:03:01.558551 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-08-29 15:03:01.558559 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:03:01.558568 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-08-29 15:03:01.558577 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:03:01.558586 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-08-29 15:03:01.558626 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:03:01.558637 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:03:01.558646 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:03:01.558682 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:03:01.558692 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:03:01.558700 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:03:01.558709 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-08-29 15:03:01.558718 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:03:01.558726 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:03:01.558735 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:03:01.558743 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:03:01.558752 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:03:01.558761 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-08-29 15:03:01.558769 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:03:01.558778 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:03:01.558787 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:03:01.558795 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:03:01.558804 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:03:01.558813 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:03:01.558821 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:03:01.558830 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-08-29 15:03:01.558839 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:03:01.558847 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:03:01.558856 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:03:01.558873 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:03:01.558882 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:03:01.558913 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-08-29 15:03:01.558922 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:03:01.558931 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:03:01.558940 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:03:01.558948 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-08-29 15:03:01.558957 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:03:01.558966 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:03:01.558980 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:03:01.558988 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-08-29 15:03:01.558997 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:03:01.559006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-08-29 15:03:01.559015 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:03:01.559024 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-08-29 15:03:01.559032 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:03:01.559041 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-08-29 15:03:01.559050 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-08-29 15:03:01.559059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-08-29 15:03:01.559067 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-08-29 15:03:01.559076 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-08-29 15:03:01.559085 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-08-29 15:03:01.559094 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-08-29 15:03:01.559103 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-08-29 15:03:01.559111 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-08-29 15:03:01.559120 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-08-29 15:03:01.559129 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-08-29 15:03:01.559138 | orchestrator |
2025-08-29 15:03:01.559147 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-08-29 15:03:01.559156 | orchestrator | Friday 29 August 2025  14:54:50 +0000 (0:00:07.808)       0:03:39.451 *********
2025-08-29 15:03:01.559165 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.559180 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.559195 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.559211 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:03:01.559225 | orchestrator |
2025-08-29 15:03:01.559284 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-08-29 15:03:01.559300 | orchestrator | Friday 29 August 2025  14:54:52 +0000 (0:00:01.616)       0:03:41.068 *********
2025-08-29 15:03:01.559315 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.559330 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.559345 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.559374 | orchestrator |
2025-08-29 15:03:01.559389 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-08-29 15:03:01.559406 | orchestrator | Friday 29 August 2025  14:54:54 +0000 (0:00:01.499)       0:03:42.567 *********
2025-08-29 15:03:01.559421 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.559437 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.559452 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.559466 | orchestrator |
2025-08-29 15:03:01.559481 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-08-29 15:03:01.559496 | orchestrator | Friday 29 August 2025  14:54:55 +0000 (0:00:01.770)       0:03:44.338 *********
2025-08-29 15:03:01.559511 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.559526 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.559541 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.559554 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.559566 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.559577 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.559588 | orchestrator |
2025-08-29 15:03:01.559599 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-08-29 15:03:01.559609 | orchestrator | Friday 29 August 2025  14:54:56 +0000 (0:00:00.540)       0:03:44.879 *********
2025-08-29 15:03:01.559620 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.559631 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.559642 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.559653 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.559717 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.559729 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.559740 | orchestrator |
2025-08-29 15:03:01.559751 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-08-29 15:03:01.559762 | orchestrator | Friday 29 August 2025  14:54:57 +0000 (0:00:00.850)       0:03:45.729 *********
2025-08-29 15:03:01.559772 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.559783 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.559794 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.559805 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.559816 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.559826 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.559837 | orchestrator |
2025-08-29 15:03:01.559848 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-08-29 15:03:01.559859 | orchestrator | Friday 29 August 2025  14:54:57 +0000 (0:00:00.744)       0:03:46.473 *********
2025-08-29 15:03:01.559880 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.559891 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.559902 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.559913 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.559924 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.559935 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.559946 | orchestrator |
2025-08-29 15:03:01.559957 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-08-29 15:03:01.559968 | orchestrator | Friday 29 August 2025  14:54:59 +0000 (0:00:01.252)       0:03:47.725 *********
2025-08-29 15:03:01.559979 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.559989 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.560000 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.560011 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560022 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560032 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560043 | orchestrator |
2025-08-29 15:03:01.560054 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-08-29 15:03:01.560080 | orchestrator | Friday 29 August 2025  14:55:00 +0000 (0:00:00.928)       0:03:48.654 *********
2025-08-29 15:03:01.560091 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.560102 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.560113 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.560124 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560134 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560144 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560154 | orchestrator |
2025-08-29 15:03:01.560164 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-08-29 15:03:01.560174 | orchestrator | Friday 29 August 2025  14:55:00 +0000 (0:00:00.755)       0:03:49.409 *********
2025-08-29 15:03:01.560184 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.560193 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.560203 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.560213 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560222 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560232 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560241 | orchestrator |
2025-08-29 15:03:01.560251 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-08-29 15:03:01.560261 | orchestrator | Friday 29 August 2025  14:55:01 +0000 (0:00:00.652)       0:03:50.062 *********
2025-08-29 15:03:01.560271 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.560323 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.560335 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.560344 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560354 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560363 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560373 | orchestrator |
2025-08-29 15:03:01.560382 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-08-29 15:03:01.560392 | orchestrator | Friday 29 August 2025  14:55:02 +0000 (0:00:00.869)       0:03:50.931 *********
2025-08-29 15:03:01.560402 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560411 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560421 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560430 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.560440 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.560449 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.560459 | orchestrator |
2025-08-29 15:03:01.560469 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-08-29 15:03:01.560478 | orchestrator | Friday 29 August 2025  14:55:05 +0000 (0:00:03.141)       0:03:54.073 *********
2025-08-29 15:03:01.560488 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.560498 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.560507 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.560517 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560526 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560536 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560546 | orchestrator |
2025-08-29 15:03:01.560555 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-08-29 15:03:01.560565 | orchestrator | Friday 29 August 2025  14:55:06 +0000 (0:00:00.687)       0:03:54.761 *********
2025-08-29 15:03:01.560575 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.560584 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.560594 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.560604 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560614 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560623 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560633 | orchestrator |
2025-08-29 15:03:01.560643 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-08-29 15:03:01.560652 | orchestrator | Friday 29 August 2025  14:55:07 +0000 (0:00:01.042)       0:03:55.803 *********
2025-08-29 15:03:01.560704 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.560715 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.560725 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.560735 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560745 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560755 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560764 | orchestrator |
2025-08-29 15:03:01.560774 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-08-29 15:03:01.560784 | orchestrator | Friday 29 August 2025  14:55:08 +0000 (0:00:00.762)       0:03:56.566 *********
2025-08-29 15:03:01.560794 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.560804 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.560814 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-08-29 15:03:01.560823 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.560833 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.560843 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.560852 | orchestrator |
2025-08-29 15:03:01.560867 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-08-29 15:03:01.560877 | orchestrator | Friday 29 August 2025  14:55:09 +0000 (0:00:01.010)       0:03:57.577 *********
2025-08-29 15:03:01.560890 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-08-29 15:03:01.560904 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-08-29 15:03:01.560916 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-08-29 15:03:01.560926 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-08-29 15:03:01.560969 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-08-29 15:03:01.560982 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-08-29 15:03:01.560992 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561001 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.561011 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.561021 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.561030 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.561040 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.561056 | orchestrator |
2025-08-29 15:03:01.561066 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-08-29 15:03:01.561076 | orchestrator | Friday 29 August 2025  14:55:10 +0000 (0:00:01.209)       0:03:58.787 *********
2025-08-29 15:03:01.561086 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561096 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.561105 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.561115 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.561124 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.561134 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.561143 | orchestrator |
2025-08-29 15:03:01.561153 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-08-29 15:03:01.561162 | orchestrator | Friday 29 August 2025  14:55:11 +0000 (0:00:00.973)       0:03:59.761 *********
2025-08-29 15:03:01.561172 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561182 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.561191 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.561201 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.561210 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.561220 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.561230 | orchestrator |
2025-08-29 15:03:01.561240 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-08-29 15:03:01.561249 | orchestrator | Friday 29 August 2025  14:55:11 +0000 (0:00:00.575)       0:04:00.336 *********
2025-08-29 15:03:01.561259 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561269 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.561278 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.561287 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.561297 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.561306 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.561317 | orchestrator |
2025-08-29 15:03:01.561327 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-08-29 15:03:01.561341 | orchestrator | Friday 29 August 2025  14:55:12 +0000 (0:00:00.978)       0:04:01.315 *********
2025-08-29 15:03:01.561358 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561374 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.561392 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.561409 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.561426 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.561442 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.561456 | orchestrator |
2025-08-29 15:03:01.561473 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-08-29 15:03:01.561489 | orchestrator | Friday 29 August 2025  14:55:13 +0000 (0:00:00.597)       0:04:01.912 *********
2025-08-29 15:03:01.561505 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561529 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.561547 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.561564 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.561580 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.561590 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.561600 | orchestrator |
2025-08-29 15:03:01.561610 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-08-29 15:03:01.561619 | orchestrator | Friday 29 August 2025  14:55:14 +0000 (0:00:01.074)       0:04:02.987 *********
2025-08-29 15:03:01.561629 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.561639 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.561648 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.561679 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.561690 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.561699 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.561709 | orchestrator |
2025-08-29 15:03:01.561718 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-08-29 15:03:01.561737 | orchestrator | Friday 29 August 2025  14:55:15 +0000 (0:00:00.835)       0:04:03.823 *********
2025-08-29 15:03:01.561747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:03:01.561757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:03:01.561766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:03:01.561776 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561785 | orchestrator |
2025-08-29 15:03:01.561795 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-08-29 15:03:01.561804 | orchestrator | Friday 29 August 2025  14:55:16 +0000 (0:00:00.795)       0:04:04.618 *********
2025-08-29 15:03:01.561814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:03:01.561823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:03:01.561833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:03:01.561842 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561852 | orchestrator |
2025-08-29 15:03:01.561861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-08-29 15:03:01.561871 | orchestrator | Friday 29 August 2025  14:55:16 +0000 (0:00:00.687)       0:04:05.306 *********
2025-08-29 15:03:01.561881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:03:01.561890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:03:01.561940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:03:01.561952 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.561962 | orchestrator |
2025-08-29 15:03:01.561971 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-08-29 15:03:01.561981 | orchestrator | Friday 29 August 2025  14:55:17 +0000 (0:00:00.937)       0:04:06.243 *********
2025-08-29 15:03:01.561991 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.562001 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.562010 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.562051 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.562061 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.562071 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.562081 | orchestrator |
2025-08-29 15:03:01.562090 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-08-29 15:03:01.562100 | orchestrator | Friday 29 August 2025  14:55:18 +0000 (0:00:00.880)       0:04:07.124 *********
2025-08-29 15:03:01.562109 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-08-29 15:03:01.562119 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-08-29 15:03:01.562129 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-08-29 15:03:01.562139 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-08-29 15:03:01.562148 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-08-29 15:03:01.562158 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.562167 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.562177 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-08-29 15:03:01.562187 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.562196 | orchestrator |
2025-08-29 15:03:01.562206 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-08-29 15:03:01.562216 | orchestrator | Friday 29 August 2025  14:55:21 +0000 (0:00:02.799)       0:04:09.924 *********
2025-08-29 15:03:01.562225 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.562235 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.562244 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.562254 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.562263 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.562276 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.562293 | orchestrator |
2025-08-29 15:03:01.562308 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 15:03:01.562323 | orchestrator | Friday 29 August 2025  14:55:24 +0000 (0:00:03.025)       0:04:12.949 *********
2025-08-29 15:03:01.562340 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.562366 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.562379 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.562389 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.562399 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.562408 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.562418 | orchestrator |
2025-08-29 15:03:01.562428 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-08-29 15:03:01.562438 | orchestrator | Friday 29 August 2025  14:55:25 +0000 (0:00:01.010)       0:04:13.959 *********
2025-08-29 15:03:01.562447 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.562457 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.562466 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.562476 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.562486 | orchestrator |
2025-08-29 15:03:01.562496 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-08-29 15:03:01.562506 | orchestrator | Friday 29 August 2025  14:55:26 +0000 (0:00:01.061)       0:04:15.021 *********
2025-08-29 15:03:01.562515 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.562525 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.562535 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.562544 | orchestrator |
2025-08-29 15:03:01.562560 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-08-29 15:03:01.562570 | orchestrator | Friday 29 August 2025  14:55:26 +0000 (0:00:00.365)       0:04:15.386 *********
2025-08-29 15:03:01.562579 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.562589 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.562599 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.562608 | orchestrator |
2025-08-29 15:03:01.562618 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-08-29 15:03:01.562628 | orchestrator | Friday 29 August 2025  14:55:28 +0000 (0:00:01.343)       0:04:16.730 *********
2025-08-29 15:03:01.562637 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-08-29 15:03:01.562647 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-08-29 15:03:01.562765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-08-29 15:03:01.562777 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.562787 | orchestrator |
2025-08-29 15:03:01.562797 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-08-29 15:03:01.562806 | orchestrator | Friday 29 August 2025  14:55:29 +0000 (0:00:00.884)       0:04:17.614 *********
2025-08-29 15:03:01.562816 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.562826 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.562835 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.562844 | orchestrator |
2025-08-29 15:03:01.562852 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-08-29 15:03:01.562860 | orchestrator | Friday 29 August 2025  14:55:29 +0000 (0:00:00.538)       0:04:18.153 *********
2025-08-29 15:03:01.562868 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.562876 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.562884 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.562891 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:03:01.562899 | orchestrator |
2025-08-29 15:03:01.562907 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-08-29 15:03:01.562915 | orchestrator | Friday 29 August 2025  14:55:30 +0000 (0:00:00.876)       0:04:19.029 *********
2025-08-29 15:03:01.562923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:03:01.562931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:03:01.562971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:03:01.562980 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.562997 | orchestrator |
2025-08-29 15:03:01.563005 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-08-29 15:03:01.563013 | orchestrator | Friday 29 August 2025  14:55:30 +0000 (0:00:00.386)       0:04:19.415 *********
2025-08-29 15:03:01.563021 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563028 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.563036 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.563044 | orchestrator |
2025-08-29 15:03:01.563052 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-08-29 15:03:01.563060 | orchestrator | Friday 29 August 2025  14:55:31 +0000 (0:00:00.544)       0:04:19.960 *********
2025-08-29 15:03:01.563068 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563076 | orchestrator |
2025-08-29 15:03:01.563084 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-08-29 15:03:01.563092 | orchestrator | Friday 29 August 2025  14:55:31 +0000 (0:00:00.217)       0:04:20.177 *********
2025-08-29 15:03:01.563099 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563107 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.563115 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.563123 | orchestrator |
2025-08-29 15:03:01.563131 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-08-29 15:03:01.563138 | orchestrator | Friday 29 August 2025  14:55:32 +0000 (0:00:00.369)       0:04:20.546 *********
2025-08-29 15:03:01.563146 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563154 | orchestrator |
2025-08-29 15:03:01.563162 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-08-29 15:03:01.563170 | orchestrator | Friday 29 August 2025  14:55:32 +0000 (0:00:00.228)       0:04:20.775 *********
2025-08-29 15:03:01.563178 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563186 | orchestrator |
2025-08-29 15:03:01.563194 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-08-29 15:03:01.563201 | orchestrator | Friday 29 August 2025  14:55:32 +0000 (0:00:00.208)       0:04:20.983 *********
2025-08-29 15:03:01.563209 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563217 | orchestrator |
2025-08-29 15:03:01.563225 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-08-29 15:03:01.563233 | orchestrator | Friday 29 August 2025  14:55:32 +0000 (0:00:00.119)       0:04:21.103 *********
2025-08-29 15:03:01.563241 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563248 | orchestrator |
2025-08-29 15:03:01.563256 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-08-29 15:03:01.563264 | orchestrator | Friday 29 August 2025  14:55:32 +0000 (0:00:00.224)       0:04:21.327 *********
2025-08-29 15:03:01.563272 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563280 | orchestrator |
2025-08-29 15:03:01.563288 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-08-29 15:03:01.563296 | orchestrator | Friday 29 August 2025  14:55:33 +0000 (0:00:00.216)       0:04:21.544 *********
2025-08-29 15:03:01.563303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:03:01.563312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:03:01.563320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:03:01.563328 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563336 | orchestrator |
2025-08-29 15:03:01.563344 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-08-29 15:03:01.563352 | orchestrator | Friday 29 August 2025  14:55:33 +0000 (0:00:00.634)       0:04:22.179 *********
2025-08-29 15:03:01.563359 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563367 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.563380 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.563388 | orchestrator |
2025-08-29 15:03:01.563396 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-08-29 15:03:01.563404 | orchestrator | Friday 29 August 2025  14:55:34 +0000 (0:00:00.577)       0:04:22.757 *********
2025-08-29 15:03:01.563418 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563426 | orchestrator |
2025-08-29 15:03:01.563434 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-08-29 15:03:01.563442 | orchestrator | Friday 29 August 2025  14:55:34 +0000 (0:00:00.250)       0:04:23.007 *********
2025-08-29 15:03:01.563450 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563458 | orchestrator |
2025-08-29 15:03:01.563465 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-08-29 15:03:01.563473 | orchestrator | Friday 29 August 2025  14:55:34 +0000 (0:00:00.272)       0:04:23.280 *********
2025-08-29 15:03:01.563481 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.563489 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.563497 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.563505 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:03:01.563513 | orchestrator |
2025-08-29 15:03:01.563521 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-08-29 15:03:01.563528 | orchestrator | Friday 29 August 2025  14:55:35 +0000 (0:00:01.070)       0:04:24.350 *********
2025-08-29 15:03:01.563536 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.563544 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.563552 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.563560 | orchestrator |
2025-08-29 15:03:01.563568 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-08-29 15:03:01.563576 | orchestrator | Friday 29 August 2025  14:55:36 +0000 (0:00:00.332)       0:04:24.683 *********
2025-08-29 15:03:01.563583 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.563591 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.563599 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.563607 | orchestrator |
2025-08-29 15:03:01.563615 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-08-29 15:03:01.563623 | orchestrator | Friday 29 August 2025  14:55:37 +0000 (0:00:01.168)       0:04:25.852 *********
2025-08-29 15:03:01.563630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:03:01.563680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:03:01.563690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:03:01.563698 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.563706 | orchestrator |
2025-08-29 15:03:01.563714 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-08-29 15:03:01.563722 | orchestrator | Friday 29 August 2025  14:55:38 +0000 (0:00:00.703)       0:04:26.555 *********
2025-08-29 15:03:01.563730 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.563738 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.563746 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.563759 | orchestrator |
2025-08-29 15:03:01.563772 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-08-29 15:03:01.563786 | orchestrator | Friday 29 August 2025  14:55:38 +0000 (0:00:00.289)       0:04:26.844 *********
2025-08-29 15:03:01.563798 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.563811 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.563825 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.563838 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:03:01.563852 | orchestrator |
2025-08-29 15:03:01.563866 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-08-29 15:03:01.563878 | orchestrator | Friday 29 August 2025  14:55:39 +0000 (0:00:00.926)       0:04:27.771 *********
2025-08-29 15:03:01.563891 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.563904 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.563917 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.563930 | orchestrator |
2025-08-29 15:03:01.563945 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-08-29 15:03:01.563954 | orchestrator | Friday 29 August 2025  14:55:39 +0000 (0:00:00.311)       0:04:28.082 *********
2025-08-29 15:03:01.563969 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.563977 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.563985 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.563992 | orchestrator |
2025-08-29 15:03:01.564000 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)]
******************** 2025-08-29 15:03:01.564008 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:01.542) 0:04:29.625 ********* 2025-08-29 15:03:01.564016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:01.564024 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:01.564032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:01.564040 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.564047 | orchestrator | 2025-08-29 15:03:01.564055 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 15:03:01.564063 | orchestrator | Friday 29 August 2025 14:55:41 +0000 (0:00:00.646) 0:04:30.271 ********* 2025-08-29 15:03:01.564073 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.564087 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.564100 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.564114 | orchestrator | 2025-08-29 15:03:01.564127 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-08-29 15:03:01.564139 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:00.373) 0:04:30.645 ********* 2025-08-29 15:03:01.564153 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.564167 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.564180 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.564193 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.564208 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.564216 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.564224 | orchestrator | 2025-08-29 15:03:01.564232 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-08-29 15:03:01.564245 | orchestrator | Friday 29 August 2025 14:55:42 +0000 (0:00:00.665) 0:04:31.311 ********* 2025-08-29 
15:03:01.564253 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.564261 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.564269 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.564276 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:01.564284 | orchestrator | 2025-08-29 15:03:01.564292 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-08-29 15:03:01.564300 | orchestrator | Friday 29 August 2025 14:55:44 +0000 (0:00:01.285) 0:04:32.597 ********* 2025-08-29 15:03:01.564308 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.564316 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.564323 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.564331 | orchestrator | 2025-08-29 15:03:01.564339 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-08-29 15:03:01.564347 | orchestrator | Friday 29 August 2025 14:55:44 +0000 (0:00:00.612) 0:04:33.209 ********* 2025-08-29 15:03:01.564354 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:01.564362 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:01.564370 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:01.564378 | orchestrator | 2025-08-29 15:03:01.564386 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-08-29 15:03:01.564393 | orchestrator | Friday 29 August 2025 14:55:46 +0000 (0:00:01.732) 0:04:34.941 ********* 2025-08-29 15:03:01.564401 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-08-29 15:03:01.564409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-08-29 15:03:01.564417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-08-29 15:03:01.564424 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:03:01.564432 | orchestrator | 2025-08-29 15:03:01.564440 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-08-29 15:03:01.564455 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:00.650) 0:04:35.592 ********* 2025-08-29 15:03:01.564462 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.564470 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.564478 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.564486 | orchestrator | 2025-08-29 15:03:01.564494 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-08-29 15:03:01.564502 | orchestrator | 2025-08-29 15:03:01.564510 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:03:01.564549 | orchestrator | Friday 29 August 2025 14:55:47 +0000 (0:00:00.564) 0:04:36.156 ********* 2025-08-29 15:03:01.564559 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:01.564567 | orchestrator | 2025-08-29 15:03:01.564575 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:03:01.564583 | orchestrator | Friday 29 August 2025 14:55:48 +0000 (0:00:00.726) 0:04:36.883 ********* 2025-08-29 15:03:01.564591 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:01.564599 | orchestrator | 2025-08-29 15:03:01.564607 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:03:01.564615 | orchestrator | Friday 29 August 2025 14:55:48 +0000 (0:00:00.523) 0:04:37.407 ********* 2025-08-29 15:03:01.564622 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.564630 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.564638 | 
orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.564646 | orchestrator | 2025-08-29 15:03:01.564654 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:03:01.564677 | orchestrator | Friday 29 August 2025 14:55:49 +0000 (0:00:00.694) 0:04:38.101 ********* 2025-08-29 15:03:01.564686 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.564694 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.564702 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.564709 | orchestrator | 2025-08-29 15:03:01.564717 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:03:01.564725 | orchestrator | Friday 29 August 2025 14:55:50 +0000 (0:00:00.543) 0:04:38.644 ********* 2025-08-29 15:03:01.564733 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.564741 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.564749 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.564756 | orchestrator | 2025-08-29 15:03:01.564764 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:03:01.564772 | orchestrator | Friday 29 August 2025 14:55:50 +0000 (0:00:00.321) 0:04:38.966 ********* 2025-08-29 15:03:01.564780 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.564788 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.564796 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.564803 | orchestrator | 2025-08-29 15:03:01.564811 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:03:01.564819 | orchestrator | Friday 29 August 2025 14:55:50 +0000 (0:00:00.302) 0:04:39.269 ********* 2025-08-29 15:03:01.564827 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.564835 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.564843 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 15:03:01.564850 | orchestrator | 2025-08-29 15:03:01.564858 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:03:01.564866 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:00.750) 0:04:40.020 ********* 2025-08-29 15:03:01.564874 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.564882 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.564891 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.564905 | orchestrator | 2025-08-29 15:03:01.564917 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:03:01.564939 | orchestrator | Friday 29 August 2025 14:55:51 +0000 (0:00:00.363) 0:04:40.383 ********* 2025-08-29 15:03:01.564953 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.564963 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.564971 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.564979 | orchestrator | 2025-08-29 15:03:01.564995 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:03:01.565004 | orchestrator | Friday 29 August 2025 14:55:52 +0000 (0:00:00.591) 0:04:40.974 ********* 2025-08-29 15:03:01.565011 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.565019 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.565027 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.565037 | orchestrator | 2025-08-29 15:03:01.565050 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:03:01.565063 | orchestrator | Friday 29 August 2025 14:55:53 +0000 (0:00:00.758) 0:04:41.733 ********* 2025-08-29 15:03:01.565075 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.565087 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.565099 | orchestrator | ok: [testbed-node-2] 2025-08-29 
15:03:01.565111 | orchestrator | 2025-08-29 15:03:01.565124 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:03:01.565138 | orchestrator | Friday 29 August 2025 14:55:54 +0000 (0:00:00.801) 0:04:42.534 ********* 2025-08-29 15:03:01.565152 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.565165 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.565179 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.565193 | orchestrator | 2025-08-29 15:03:01.565206 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:03:01.565219 | orchestrator | Friday 29 August 2025 14:55:54 +0000 (0:00:00.355) 0:04:42.889 ********* 2025-08-29 15:03:01.565228 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.565235 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.565243 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.565251 | orchestrator | 2025-08-29 15:03:01.565259 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:03:01.565266 | orchestrator | Friday 29 August 2025 14:55:55 +0000 (0:00:00.632) 0:04:43.522 ********* 2025-08-29 15:03:01.565274 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.565282 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.565289 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.565297 | orchestrator | 2025-08-29 15:03:01.565305 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:03:01.565313 | orchestrator | Friday 29 August 2025 14:55:55 +0000 (0:00:00.302) 0:04:43.824 ********* 2025-08-29 15:03:01.565320 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.565328 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.565336 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.565344 | 
orchestrator | 2025-08-29 15:03:01.565351 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:03:01.565391 | orchestrator | Friday 29 August 2025 14:55:55 +0000 (0:00:00.319) 0:04:44.144 ********* 2025-08-29 15:03:01.565401 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.565409 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.565417 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.565425 | orchestrator | 2025-08-29 15:03:01.565433 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:03:01.565441 | orchestrator | Friday 29 August 2025 14:55:56 +0000 (0:00:00.372) 0:04:44.516 ********* 2025-08-29 15:03:01.565449 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.565457 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.565464 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.565472 | orchestrator | 2025-08-29 15:03:01.565480 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:03:01.565488 | orchestrator | Friday 29 August 2025 14:55:56 +0000 (0:00:00.661) 0:04:45.178 ********* 2025-08-29 15:03:01.565504 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.565511 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.565519 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.565527 | orchestrator | 2025-08-29 15:03:01.565535 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:03:01.565543 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:00.338) 0:04:45.516 ********* 2025-08-29 15:03:01.565551 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.565562 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.565576 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.565590 | orchestrator | 
2025-08-29 15:03:01.565604 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:03:01.565618 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:00.326) 0:04:45.843 ********* 2025-08-29 15:03:01.565632 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.565642 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.565650 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.565699 | orchestrator | 2025-08-29 15:03:01.565708 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:03:01.565716 | orchestrator | Friday 29 August 2025 14:55:57 +0000 (0:00:00.371) 0:04:46.214 ********* 2025-08-29 15:03:01.565724 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.565732 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.565740 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.565748 | orchestrator | 2025-08-29 15:03:01.565756 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-08-29 15:03:01.565764 | orchestrator | Friday 29 August 2025 14:55:58 +0000 (0:00:00.836) 0:04:47.050 ********* 2025-08-29 15:03:01.565772 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.565779 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.565787 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.565797 | orchestrator | 2025-08-29 15:03:01.565811 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-08-29 15:03:01.565824 | orchestrator | Friday 29 August 2025 14:55:58 +0000 (0:00:00.343) 0:04:47.394 ********* 2025-08-29 15:03:01.565837 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:01.565851 | orchestrator | 2025-08-29 15:03:01.565864 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2025-08-29 15:03:01.565878 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:00.829) 0:04:48.224 ********* 2025-08-29 15:03:01.565891 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.565903 | orchestrator | 2025-08-29 15:03:01.565911 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-08-29 15:03:01.565919 | orchestrator | Friday 29 August 2025 14:55:59 +0000 (0:00:00.168) 0:04:48.392 ********* 2025-08-29 15:03:01.565933 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-08-29 15:03:01.565942 | orchestrator | 2025-08-29 15:03:01.565950 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-08-29 15:03:01.565957 | orchestrator | Friday 29 August 2025 14:56:00 +0000 (0:00:01.088) 0:04:49.481 ********* 2025-08-29 15:03:01.565965 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.565973 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.565981 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.565988 | orchestrator | 2025-08-29 15:03:01.565996 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-08-29 15:03:01.566004 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:00.341) 0:04:49.822 ********* 2025-08-29 15:03:01.566012 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.566047 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.566055 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.566063 | orchestrator | 2025-08-29 15:03:01.566071 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-08-29 15:03:01.566079 | orchestrator | Friday 29 August 2025 14:56:01 +0000 (0:00:00.357) 0:04:50.179 ********* 2025-08-29 15:03:01.566094 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:01.566102 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:01.566116 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:01.566130 | orchestrator | 2025-08-29 15:03:01.566144 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-08-29 15:03:01.566158 | orchestrator | Friday 29 August 2025 14:56:02 +0000 (0:00:01.166) 0:04:51.345 ********* 2025-08-29 15:03:01.566172 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:01.566185 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:01.566200 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:01.566213 | orchestrator | 2025-08-29 15:03:01.566226 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-08-29 15:03:01.566233 | orchestrator | Friday 29 August 2025 14:56:03 +0000 (0:00:01.124) 0:04:52.470 ********* 2025-08-29 15:03:01.566240 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:01.566246 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:01.566253 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:01.566259 | orchestrator | 2025-08-29 15:03:01.566266 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-08-29 15:03:01.566273 | orchestrator | Friday 29 August 2025 14:56:04 +0000 (0:00:00.732) 0:04:53.202 ********* 2025-08-29 15:03:01.566279 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.566286 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.566292 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.566299 | orchestrator | 2025-08-29 15:03:01.566338 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-08-29 15:03:01.566351 | orchestrator | Friday 29 August 2025 14:56:05 +0000 (0:00:00.875) 0:04:54.078 ********* 2025-08-29 15:03:01.566363 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:01.566374 | orchestrator | 2025-08-29 15:03:01.566385 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2025-08-29 15:03:01.566396 | orchestrator | Friday 29 August 2025 14:56:06 +0000 (0:00:01.355) 0:04:55.433 ********* 2025-08-29 15:03:01.566408 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.566415 | orchestrator | 2025-08-29 15:03:01.566422 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-08-29 15:03:01.566429 | orchestrator | Friday 29 August 2025 14:56:07 +0000 (0:00:00.757) 0:04:56.191 ********* 2025-08-29 15:03:01.566435 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:03:01.566442 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.566449 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.566456 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:03:01.566462 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-08-29 15:03:01.566469 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:03:01.566476 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:03:01.566482 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-08-29 15:03:01.566489 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:03:01.566495 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-08-29 15:03:01.566502 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-08-29 15:03:01.566509 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-08-29 15:03:01.566515 | orchestrator | 2025-08-29 15:03:01.566522 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-08-29 15:03:01.566529 | orchestrator | Friday 29 August 2025 14:56:11 +0000 (0:00:03.778) 0:04:59.970 ********* 2025-08-29 15:03:01.566535 
| orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:01.566542 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:01.566548 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:01.566555 | orchestrator | 2025-08-29 15:03:01.566567 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-08-29 15:03:01.566574 | orchestrator | Friday 29 August 2025 14:56:13 +0000 (0:00:01.644) 0:05:01.615 ********* 2025-08-29 15:03:01.566581 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.566587 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.566594 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.566601 | orchestrator | 2025-08-29 15:03:01.566607 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-08-29 15:03:01.566614 | orchestrator | Friday 29 August 2025 14:56:13 +0000 (0:00:00.386) 0:05:02.001 ********* 2025-08-29 15:03:01.566621 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:01.566627 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:01.566634 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:01.566640 | orchestrator | 2025-08-29 15:03:01.566647 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-08-29 15:03:01.566673 | orchestrator | Friday 29 August 2025 14:56:13 +0000 (0:00:00.325) 0:05:02.326 ********* 2025-08-29 15:03:01.566686 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:01.566697 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:01.566709 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:01.566719 | orchestrator | 2025-08-29 15:03:01.566737 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-08-29 15:03:01.566749 | orchestrator | Friday 29 August 2025 14:56:16 +0000 (0:00:02.285) 0:05:04.611 ********* 2025-08-29 15:03:01.566760 | orchestrator | changed: [testbed-node-0] 
2025-08-29 15:03:01.566770 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:01.566781 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:01.566793 | orchestrator | 2025-08-29 15:03:01.566803 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-08-29 15:03:01.566815 | orchestrator | Friday 29 August 2025 14:56:18 +0000 (0:00:01.890) 0:05:06.501 ********* 2025-08-29 15:03:01.566822 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.566829 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.566835 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.566842 | orchestrator | 2025-08-29 15:03:01.566849 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-08-29 15:03:01.566855 | orchestrator | Friday 29 August 2025 14:56:18 +0000 (0:00:00.342) 0:05:06.844 ********* 2025-08-29 15:03:01.566862 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:01.566873 | orchestrator | 2025-08-29 15:03:01.566883 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-08-29 15:03:01.566894 | orchestrator | Friday 29 August 2025 14:56:18 +0000 (0:00:00.559) 0:05:07.403 ********* 2025-08-29 15:03:01.566904 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.566916 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:01.566927 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:01.566937 | orchestrator | 2025-08-29 15:03:01.566947 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-08-29 15:03:01.566953 | orchestrator | Friday 29 August 2025 14:56:19 +0000 (0:00:00.546) 0:05:07.950 ********* 2025-08-29 15:03:01.566960 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:01.566966 | orchestrator | skipping: 
[testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Friday 29 August 2025 14:56:19 +0000 (0:00:00.330) 0:05:08.280 *********
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Friday 29 August 2025 14:56:20 +0000 (0:00:00.605) 0:05:08.886 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Friday 29 August 2025 14:56:22 +0000 (0:00:02.381) 0:05:11.268 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Friday 29 August 2025 14:56:24 +0000 (0:00:01.240) 0:05:12.509 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Start the monitor service] ************************************
Friday 29 August 2025 14:56:25 +0000 (0:00:01.753) 0:05:14.262 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Friday 29 August 2025 14:56:27 +0000 (0:00:02.064) 0:05:16.327 *********
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Friday 29 August 2025 14:56:28 +0000 (0:00:00.868) 0:05:17.196 *********
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Friday 29 August 2025 14:56:29 +0000 (0:00:01.242) 0:05:18.438 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Friday 29 August 2025 14:56:39 +0000 (0:00:09.838) 0:05:28.277 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Friday 29 August 2025 14:56:40 +0000 (0:00:00.363) 0:05:28.640 *********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__406317d949d6b2e1d25f544337e3dd3fa90ef29a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__406317d949d6b2e1d25f544337e3dd3fa90ef29a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__406317d949d6b2e1d25f544337e3dd3fa90ef29a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__406317d949d6b2e1d25f544337e3dd3fa90ef29a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__406317d949d6b2e1d25f544337e3dd3fa90ef29a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__406317d949d6b2e1d25f544337e3dd3fa90ef29a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__406317d949d6b2e1d25f544337e3dd3fa90ef29a'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 29 August 2025 14:56:54 +0000 (0:00:14.457) 0:05:43.098 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Friday 29 August 2025 14:56:55 +0000 (0:00:00.441) 0:05:43.539 *********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Friday 29 August 2025 14:56:55 +0000 (0:00:00.801) 0:05:44.341 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Friday 29 August 2025 14:56:56 +0000 (0:00:00.372) 0:05:44.713 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Friday 29 August 2025 14:56:56 +0000 (0:00:00.358) 0:05:45.071 *********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Friday 29 August 2025 14:56:57 +0000 (0:00:00.886) 0:05:45.958 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 29 August 2025 14:56:58 +0000 (0:00:00.809) 0:05:46.767 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 29 August 2025 14:56:58 +0000 (0:00:00.536) 0:05:47.304 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Friday 29 August 2025 14:56:59 +0000 (0:00:00.778) 0:05:48.082 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 29 August 2025 14:57:00 +0000 (0:00:00.736) 0:05:48.818 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 29 August 2025 14:57:00 +0000 (0:00:00.308) 0:05:49.126 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 29 August 2025 14:57:00 +0000 (0:00:00.330) 0:05:49.456 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 29 August 2025 14:57:01 +0000 (0:00:00.337) 0:05:49.794 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 29 August 2025 14:57:02 +0000 (0:00:00.960) 0:05:50.754 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 29 August 2025 14:57:02 +0000 (0:00:00.335) 0:05:51.089 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 29 August 2025 14:57:02 +0000 (0:00:00.326) 0:05:51.416 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 29 August 2025 14:57:03 +0000 (0:00:00.720) 0:05:52.137 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 29 August 2025 14:57:04 +0000 (0:00:01.049) 0:05:53.187 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 29 August 2025 14:57:05 +0000 (0:00:00.306) 0:05:53.493 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 29 August 2025 14:57:05 +0000 (0:00:00.358) 0:05:53.852 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 29 August 2025 14:57:05 +0000 (0:00:00.331) 0:05:54.183 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 29 August 2025 14:57:06 +0000 (0:00:00.554) 0:05:54.738 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 29 August 2025 14:57:06 +0000 (0:00:00.318) 0:05:55.056 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 29 August 2025 14:57:06 +0000 (0:00:00.335) 0:05:55.392 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 29 August 2025 14:57:07 +0000 (0:00:00.320) 0:05:55.712 *********
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 29 August 2025 14:57:07 +0000 (0:00:00.657) 0:05:56.370 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 29 August 2025 14:57:08 +0000 (0:00:00.363) 0:05:56.733 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Friday 29 August 2025 14:57:08 +0000 (0:00:00.577) 0:05:57.311 *********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Friday 29 August 2025 14:57:09 +0000 (0:00:00.881) 0:05:58.192 *********
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Friday 29 August 2025 14:57:10 +0000 (0:00:00.818) 0:05:59.011 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Friday 29 August 2025 14:57:11 +0000 (0:00:00.686) 0:05:59.697 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Friday 29 August 2025 14:57:11 +0000 (0:00:00.352) 0:06:00.050 *********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Friday 29 August 2025 14:57:22 +0000 (0:00:10.612) 0:06:10.662 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Friday 29 August 2025 14:57:22 +0000 (0:00:00.645) 0:06:11.308 *********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Friday 29 August 2025 14:57:24 +0000 (0:00:01.925) 0:06:13.233 *********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Friday 29 August 2025 14:57:25 +0000 (0:00:01.129) 0:06:14.363 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Friday 29 August 2025 14:57:26 +0000 (0:00:00.695) 0:06:15.059 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Friday 29 August 2025 14:57:27 +0000 (0:00:00.538) 0:06:15.598 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Friday 29 August 2025 14:57:27 +0000 (0:00:00.335) 0:06:15.933 *********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Friday 29 August 2025 14:57:28 +0000 (0:00:00.561) 0:06:16.495 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Friday 29 August 2025 14:57:28 +0000 (0:00:00.328) 0:06:16.823 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Friday 29 August 2025 14:57:28 +0000 (0:00:00.654) 0:06:17.478 *********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Friday 29 August 2025 14:57:29 +0000 (0:00:00.566) 0:06:18.044 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Friday 29 August 2025 14:57:30 +0000 (0:00:01.348) 0:06:19.393 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Friday 29 August 2025 14:57:32 +0000 (0:00:01.359) 0:06:20.753 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Friday 29 August 2025 14:57:33 +0000 (0:00:01.678) 0:06:22.431 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Friday 29 August 2025 14:57:36 +0000 (0:00:02.094) 0:06:24.526 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Friday 29 August 2025 14:57:36 +0000 (0:00:00.394) 0:06:24.921 *********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Friday 29 August 2025 14:58:07 +0000 (0:00:30.720) 0:06:55.641 *********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Friday 29 August 2025 14:58:08 +0000 (0:00:01.330) 0:06:56.972 *********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Friday 29 August 2025 14:58:08 +0000 (0:00:00.165) 0:06:57.301 *********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Friday 29 August 2025 14:58:08 +0000 (0:00:00.165) 0:06:57.466 *********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Friday 29 August 2025 14:58:15 +0000 (0:00:06.529) 0:07:03.995 *********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 29 August 2025 14:58:20 +0000 (0:00:04.752) 0:07:08.748 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Friday 29 August 2025 14:58:21 +0000 (0:00:01.062) 0:07:09.811 *********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Friday 29 August 2025 14:58:21 +0000 (0:00:00.550) 0:07:10.361 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Friday 29 August 2025 14:58:22 +0000 (0:00:00.318) 0:07:10.679 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Friday 29 August 2025 14:58:23 +0000 (0:00:01.515) 0:07:12.195 *********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Friday 29 August 2025 14:58:24 +0000 (0:00:00.646) 0:07:12.841 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 29 August 2025 14:58:25 +0000 (0:00:00.703) 0:07:13.545 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 29 August 2025 14:58:25 +0000 (0:00:00.754) 0:07:14.299 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Friday 29 August 2025 14:58:26 +0000 (0:00:00.562) 0:07:14.862 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 29 August 2025 14:58:26 +0000 (0:00:00.298) 0:07:15.161 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 29 August 2025 14:58:27 +0000 (0:00:01.024) 0:07:16.185 *********
2025-08-29 15:03:01.570297 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.570303 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.570309 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.570315 | orchestrator | 2025-08-29 15:03:01.570321 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:03:01.570327 | orchestrator | Friday 29 August 2025 14:58:28 +0000 (0:00:00.794) 0:07:16.980 ********* 2025-08-29 15:03:01.570334 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.570340 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.570346 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.570352 | orchestrator | 2025-08-29 15:03:01.570358 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:03:01.570364 | orchestrator | Friday 29 August 2025 14:58:29 +0000 (0:00:00.741) 0:07:17.721 ********* 2025-08-29 15:03:01.570371 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.570382 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.570389 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.570395 | orchestrator | 2025-08-29 15:03:01.570401 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:03:01.570408 | orchestrator | Friday 29 August 2025 14:58:29 +0000 (0:00:00.330) 0:07:18.051 ********* 2025-08-29 15:03:01.570435 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.570443 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.570449 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.570456 | orchestrator | 2025-08-29 15:03:01.570462 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:03:01.570468 | orchestrator | Friday 29 August 2025 14:58:30 +0000 (0:00:00.566) 0:07:18.617 ********* 2025-08-29 15:03:01.570474 | 
orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.570480 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.570487 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.570498 | orchestrator | 2025-08-29 15:03:01.570508 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:03:01.570519 | orchestrator | Friday 29 August 2025 14:58:30 +0000 (0:00:00.346) 0:07:18.964 ********* 2025-08-29 15:03:01.570529 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.570539 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.570551 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.570558 | orchestrator | 2025-08-29 15:03:01.570564 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:03:01.570570 | orchestrator | Friday 29 August 2025 14:58:31 +0000 (0:00:00.703) 0:07:19.668 ********* 2025-08-29 15:03:01.570576 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.570582 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.570588 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.570595 | orchestrator | 2025-08-29 15:03:01.570601 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:03:01.570607 | orchestrator | Friday 29 August 2025 14:58:31 +0000 (0:00:00.759) 0:07:20.427 ********* 2025-08-29 15:03:01.570613 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.570619 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.570625 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.570631 | orchestrator | 2025-08-29 15:03:01.570638 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:03:01.570644 | orchestrator | Friday 29 August 2025 14:58:32 +0000 (0:00:00.556) 0:07:20.984 ********* 2025-08-29 15:03:01.570650 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:03:01.570695 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.570702 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.570708 | orchestrator | 2025-08-29 15:03:01.570714 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:03:01.570721 | orchestrator | Friday 29 August 2025 14:58:32 +0000 (0:00:00.318) 0:07:21.303 ********* 2025-08-29 15:03:01.570727 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.570733 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.570739 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.570745 | orchestrator | 2025-08-29 15:03:01.570751 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:03:01.570758 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:00.343) 0:07:21.646 ********* 2025-08-29 15:03:01.570764 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.570770 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.570776 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.570782 | orchestrator | 2025-08-29 15:03:01.570788 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:03:01.570795 | orchestrator | Friday 29 August 2025 14:58:33 +0000 (0:00:00.329) 0:07:21.976 ********* 2025-08-29 15:03:01.570801 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.570807 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.570813 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.570826 | orchestrator | 2025-08-29 15:03:01.570832 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:03:01.570839 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:00.701) 0:07:22.677 ********* 2025-08-29 15:03:01.570845 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.570855 | 
orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.570861 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.570868 | orchestrator | 2025-08-29 15:03:01.570874 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:03:01.570880 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:00.317) 0:07:22.994 ********* 2025-08-29 15:03:01.570886 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.570893 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.570899 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.570905 | orchestrator | 2025-08-29 15:03:01.570912 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:03:01.570918 | orchestrator | Friday 29 August 2025 14:58:34 +0000 (0:00:00.315) 0:07:23.310 ********* 2025-08-29 15:03:01.570924 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.570930 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.570936 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.570942 | orchestrator | 2025-08-29 15:03:01.570949 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:03:01.570955 | orchestrator | Friday 29 August 2025 14:58:35 +0000 (0:00:00.306) 0:07:23.617 ********* 2025-08-29 15:03:01.570961 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.570967 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.570973 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.570979 | orchestrator | 2025-08-29 15:03:01.570986 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:03:01.570992 | orchestrator | Friday 29 August 2025 14:58:35 +0000 (0:00:00.742) 0:07:24.359 ********* 2025-08-29 15:03:01.570998 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.571004 | orchestrator | ok: 
[testbed-node-4] 2025-08-29 15:03:01.571010 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.571017 | orchestrator | 2025-08-29 15:03:01.571023 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-08-29 15:03:01.571029 | orchestrator | Friday 29 August 2025 14:58:36 +0000 (0:00:00.539) 0:07:24.899 ********* 2025-08-29 15:03:01.571035 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.571041 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.571047 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.571054 | orchestrator | 2025-08-29 15:03:01.571060 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-08-29 15:03:01.571066 | orchestrator | Friday 29 August 2025 14:58:36 +0000 (0:00:00.327) 0:07:25.227 ********* 2025-08-29 15:03:01.571072 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:03:01.571083 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:03:01.571090 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:03:01.571096 | orchestrator | 2025-08-29 15:03:01.571102 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-08-29 15:03:01.571109 | orchestrator | Friday 29 August 2025 14:58:37 +0000 (0:00:00.897) 0:07:26.124 ********* 2025-08-29 15:03:01.571115 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.571121 | orchestrator | 2025-08-29 15:03:01.571127 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-08-29 15:03:01.571133 | orchestrator | Friday 29 August 2025 14:58:38 +0000 (0:00:00.777) 0:07:26.902 ********* 2025-08-29 15:03:01.571139 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:03:01.571146 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.571156 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.571162 | orchestrator | 2025-08-29 15:03:01.571169 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-08-29 15:03:01.571175 | orchestrator | Friday 29 August 2025 14:58:38 +0000 (0:00:00.316) 0:07:27.219 ********* 2025-08-29 15:03:01.571181 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.571187 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.571193 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.571199 | orchestrator | 2025-08-29 15:03:01.571206 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-08-29 15:03:01.571212 | orchestrator | Friday 29 August 2025 14:58:39 +0000 (0:00:00.292) 0:07:27.511 ********* 2025-08-29 15:03:01.571218 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.571224 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.571229 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.571235 | orchestrator | 2025-08-29 15:03:01.571240 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-08-29 15:03:01.571246 | orchestrator | Friday 29 August 2025 14:58:39 +0000 (0:00:00.909) 0:07:28.421 ********* 2025-08-29 15:03:01.571251 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.571257 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.571262 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.571267 | orchestrator | 2025-08-29 15:03:01.571273 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-08-29 15:03:01.571278 | orchestrator | Friday 29 August 2025 14:58:40 +0000 (0:00:00.372) 0:07:28.793 ********* 2025-08-29 15:03:01.571284 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:03:01.571289 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:03:01.571295 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-08-29 15:03:01.571300 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:03:01.571306 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:03:01.571311 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-08-29 15:03:01.571317 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:03:01.571326 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:03:01.571332 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-08-29 15:03:01.571337 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:03:01.571343 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:03:01.571348 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-08-29 15:03:01.571353 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:03:01.571359 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:03:01.571364 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-08-29 15:03:01.571369 | orchestrator | 2025-08-29 15:03:01.571375 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-08-29 15:03:01.571380 | orchestrator | Friday 29 August 2025 14:58:42 +0000 (0:00:02.238) 0:07:31.032 ********* 2025-08-29 15:03:01.571386 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.571391 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.571396 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.571402 | orchestrator | 2025-08-29 15:03:01.571407 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-08-29 15:03:01.571413 | orchestrator | Friday 29 August 2025 14:58:42 +0000 (0:00:00.317) 0:07:31.350 ********* 2025-08-29 15:03:01.571423 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.571428 | orchestrator | 2025-08-29 15:03:01.571434 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-08-29 15:03:01.571439 | orchestrator | Friday 29 August 2025 14:58:43 +0000 (0:00:00.950) 0:07:32.300 ********* 2025-08-29 15:03:01.571444 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:03:01.571450 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:03:01.571455 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-08-29 15:03:01.571468 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-08-29 15:03:01.571474 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-08-29 15:03:01.571479 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-08-29 15:03:01.571485 | orchestrator | 2025-08-29 15:03:01.571490 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-08-29 15:03:01.571495 | orchestrator | Friday 29 August 2025 14:58:44 +0000 (0:00:00.991) 0:07:33.292 ********* 2025-08-29 15:03:01.571501 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.571506 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:03:01.571512 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:03:01.571517 | orchestrator | 2025-08-29 15:03:01.571522 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:03:01.571528 | orchestrator | Friday 29 August 2025 14:58:47 +0000 (0:00:02.203) 0:07:35.495 ********* 2025-08-29 15:03:01.571533 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:03:01.571539 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:03:01.571544 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.571550 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:03:01.571555 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:03:01.571561 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.571566 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:03:01.571571 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:03:01.571577 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.571582 | orchestrator | 2025-08-29 15:03:01.571587 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-08-29 15:03:01.571593 | orchestrator | Friday 29 August 2025 14:58:48 +0000 (0:00:01.442) 0:07:36.938 ********* 2025-08-29 15:03:01.571598 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:03:01.571604 | orchestrator | 2025-08-29 15:03:01.571609 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-08-29 15:03:01.571614 | orchestrator | Friday 29 August 2025 14:58:50 +0000 (0:00:02.141) 0:07:39.079 ********* 2025-08-29 15:03:01.571620 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.571625 | orchestrator | 2025-08-29 15:03:01.571630 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-08-29 15:03:01.571636 | orchestrator | Friday 29 August 2025 14:58:51 +0000 (0:00:00.547) 0:07:39.627 ********* 2025-08-29 15:03:01.571641 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-cd5b7d9a-1dd4-5184-a319-6c247fab2039', 'data_vg': 'ceph-cd5b7d9a-1dd4-5184-a319-6c247fab2039'}) 2025-08-29 15:03:01.571649 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4c2f47a1-6693-5b64-9c97-de0e0041f7f6', 'data_vg': 'ceph-4c2f47a1-6693-5b64-9c97-de0e0041f7f6'}) 2025-08-29 15:03:01.571665 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ea955146-254c-5a5a-83ec-c4f4ca16d6a1', 'data_vg': 'ceph-ea955146-254c-5a5a-83ec-c4f4ca16d6a1'}) 2025-08-29 15:03:01.571677 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-218f7b56-b785-5eaf-b35f-b0ddc87960c6', 'data_vg': 'ceph-218f7b56-b785-5eaf-b35f-b0ddc87960c6'}) 2025-08-29 15:03:01.571686 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-aeb09036-0b6a-534a-a94a-678fcf7bc5df', 'data_vg': 'ceph-aeb09036-0b6a-534a-a94a-678fcf7bc5df'}) 2025-08-29 15:03:01.571691 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-95dc25c6-61fb-51c1-a723-34c7e57ec220', 'data_vg': 'ceph-95dc25c6-61fb-51c1-a723-34c7e57ec220'}) 2025-08-29 15:03:01.571697 | orchestrator | 2025-08-29 15:03:01.571702 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-08-29 15:03:01.571708 | orchestrator | Friday 29 August 2025 14:59:34 +0000 (0:00:43.428) 0:08:23.055 ********* 2025-08-29 15:03:01.571713 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.571718 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
15:03:01.571724 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.571729 | orchestrator | 2025-08-29 15:03:01.571735 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-08-29 15:03:01.571740 | orchestrator | Friday 29 August 2025 14:59:35 +0000 (0:00:00.608) 0:08:23.664 ********* 2025-08-29 15:03:01.571746 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.571751 | orchestrator | 2025-08-29 15:03:01.571756 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-08-29 15:03:01.571762 | orchestrator | Friday 29 August 2025 14:59:35 +0000 (0:00:00.568) 0:08:24.233 ********* 2025-08-29 15:03:01.571767 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.571773 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.571778 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.571784 | orchestrator | 2025-08-29 15:03:01.571789 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-08-29 15:03:01.571795 | orchestrator | Friday 29 August 2025 14:59:36 +0000 (0:00:00.662) 0:08:24.896 ********* 2025-08-29 15:03:01.571800 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.571805 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.571811 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.571816 | orchestrator | 2025-08-29 15:03:01.571822 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-08-29 15:03:01.571827 | orchestrator | Friday 29 August 2025 14:59:39 +0000 (0:00:02.822) 0:08:27.719 ********* 2025-08-29 15:03:01.571832 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.571838 | orchestrator | 2025-08-29 15:03:01.571847 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-08-29 15:03:01.571853 | orchestrator | Friday 29 August 2025 14:59:39 +0000 (0:00:00.552) 0:08:28.271 ********* 2025-08-29 15:03:01.571858 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.571863 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.571869 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.571874 | orchestrator | 2025-08-29 15:03:01.571879 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-08-29 15:03:01.571885 | orchestrator | Friday 29 August 2025 14:59:40 +0000 (0:00:01.188) 0:08:29.459 ********* 2025-08-29 15:03:01.571890 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.571895 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.571901 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.571906 | orchestrator | 2025-08-29 15:03:01.571912 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-08-29 15:03:01.571917 | orchestrator | Friday 29 August 2025 14:59:42 +0000 (0:00:01.457) 0:08:30.916 ********* 2025-08-29 15:03:01.571923 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.571928 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.571933 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.571939 | orchestrator | 2025-08-29 15:03:01.571944 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-08-29 15:03:01.571954 | orchestrator | Friday 29 August 2025 14:59:44 +0000 (0:00:01.710) 0:08:32.627 ********* 2025-08-29 15:03:01.571959 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.571964 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.571970 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.571975 | orchestrator | 2025-08-29 15:03:01.571981 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-08-29 15:03:01.571986 | orchestrator | Friday 29 August 2025 14:59:44 +0000 (0:00:00.424) 0:08:33.052 ********* 2025-08-29 15:03:01.571991 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.571997 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.572002 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.572008 | orchestrator | 2025-08-29 15:03:01.572013 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-08-29 15:03:01.572018 | orchestrator | Friday 29 August 2025 14:59:44 +0000 (0:00:00.425) 0:08:33.477 ********* 2025-08-29 15:03:01.572024 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-08-29 15:03:01.572029 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-08-29 15:03:01.572035 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-08-29 15:03:01.572040 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:03:01.572045 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-08-29 15:03:01.572051 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-08-29 15:03:01.572056 | orchestrator | 2025-08-29 15:03:01.572062 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-08-29 15:03:01.572067 | orchestrator | Friday 29 August 2025 14:59:46 +0000 (0:00:01.611) 0:08:35.089 ********* 2025-08-29 15:03:01.572072 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-08-29 15:03:01.572078 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 15:03:01.572083 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-08-29 15:03:01.572089 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-08-29 15:03:01.572094 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-08-29 15:03:01.572099 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-08-29 15:03:01.572105 | orchestrator | 2025-08-29 15:03:01.572110 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-08-29 15:03:01.572116 | orchestrator | Friday 29 August 2025 14:59:48 +0000 (0:00:02.259) 0:08:37.348 ********* 2025-08-29 15:03:01.572124 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-08-29 15:03:01.572130 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-08-29 15:03:01.572135 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-08-29 15:03:01.572140 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-08-29 15:03:01.572146 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-08-29 15:03:01.572151 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-08-29 15:03:01.572156 | orchestrator | 2025-08-29 15:03:01.572162 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-08-29 15:03:01.572167 | orchestrator | Friday 29 August 2025 14:59:52 +0000 (0:00:03.670) 0:08:41.019 ********* 2025-08-29 15:03:01.572173 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.572178 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.572183 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:03:01.572189 | orchestrator | 2025-08-29 15:03:01.572194 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-08-29 15:03:01.572200 | orchestrator | Friday 29 August 2025 14:59:55 +0000 (0:00:03.228) 0:08:44.248 ********* 2025-08-29 15:03:01.572205 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.572211 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.572216 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-08-29 15:03:01.572222 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:03:01.572227 | orchestrator |
2025-08-29 15:03:01.572236 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-08-29 15:03:01.572241 | orchestrator | Friday 29 August 2025 15:00:08 +0000 (0:00:13.141) 0:08:57.390 *********
2025-08-29 15:03:01.572247 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572252 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.572258 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.572263 | orchestrator |
2025-08-29 15:03:01.572268 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-08-29 15:03:01.572274 | orchestrator | Friday 29 August 2025 15:00:09 +0000 (0:00:00.802) 0:08:58.192 *********
2025-08-29 15:03:01.572279 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572285 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.572290 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.572295 | orchestrator |
2025-08-29 15:03:01.572301 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-08-29 15:03:01.572310 | orchestrator | Friday 29 August 2025 15:00:10 +0000 (0:00:00.622) 0:08:58.814 *********
2025-08-29 15:03:01.572315 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:03:01.572321 | orchestrator |
2025-08-29 15:03:01.572326 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-08-29 15:03:01.572331 | orchestrator | Friday 29 August 2025 15:00:10 +0000 (0:00:00.592) 0:08:59.407 *********
2025-08-29 15:03:01.572337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:03:01.572342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:03:01.572348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:03:01.572353 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572359 | orchestrator |
2025-08-29 15:03:01.572364 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-08-29 15:03:01.572370 | orchestrator | Friday 29 August 2025 15:00:11 +0000 (0:00:00.404) 0:08:59.811 *********
2025-08-29 15:03:01.572375 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572380 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.572386 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.572391 | orchestrator |
2025-08-29 15:03:01.572397 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-08-29 15:03:01.572402 | orchestrator | Friday 29 August 2025 15:00:11 +0000 (0:00:00.298) 0:09:00.109 *********
2025-08-29 15:03:01.572408 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572413 | orchestrator |
2025-08-29 15:03:01.572418 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-08-29 15:03:01.572424 | orchestrator | Friday 29 August 2025 15:00:11 +0000 (0:00:00.266) 0:09:00.376 *********
2025-08-29 15:03:01.572429 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572435 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.572440 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.572445 | orchestrator |
2025-08-29 15:03:01.572451 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-08-29 15:03:01.572456 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:00.592) 0:09:00.969 *********
2025-08-29 15:03:01.572462 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572467 | orchestrator |
2025-08-29 15:03:01.572473 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-08-29 15:03:01.572478 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:00.226) 0:09:01.195 *********
2025-08-29 15:03:01.572484 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572489 | orchestrator |
2025-08-29 15:03:01.572494 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-08-29 15:03:01.572500 | orchestrator | Friday 29 August 2025 15:00:12 +0000 (0:00:00.255) 0:09:01.450 *********
2025-08-29 15:03:01.572505 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572511 | orchestrator |
2025-08-29 15:03:01.572516 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-08-29 15:03:01.572525 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.131) 0:09:01.582 *********
2025-08-29 15:03:01.572531 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572536 | orchestrator |
2025-08-29 15:03:01.572541 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-08-29 15:03:01.572547 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.234) 0:09:01.816 *********
2025-08-29 15:03:01.572552 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572557 | orchestrator |
2025-08-29 15:03:01.572563 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-08-29 15:03:01.572572 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.441) 0:09:02.044 *********
2025-08-29 15:03:01.572578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-08-29 15:03:01.572583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-08-29 15:03:01.572589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-08-29 15:03:01.572594 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572599 | orchestrator |
2025-08-29 15:03:01.572605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-08-29 15:03:01.572610 | orchestrator | Friday 29 August 2025 15:00:13 +0000 (0:00:00.317) 0:09:02.486 *********
2025-08-29 15:03:01.572616 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572621 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.572626 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.572632 | orchestrator |
2025-08-29 15:03:01.572638 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-08-29 15:03:01.572643 | orchestrator | Friday 29 August 2025 15:00:14 +0000 (0:00:00.890) 0:09:02.803 *********
2025-08-29 15:03:01.572648 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572670 | orchestrator |
2025-08-29 15:03:01.572676 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-08-29 15:03:01.572681 | orchestrator | Friday 29 August 2025 15:00:15 +0000 (0:00:00.890) 0:09:03.694 *********
2025-08-29 15:03:01.572686 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572692 | orchestrator |
2025-08-29 15:03:01.572697 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-08-29 15:03:01.572703 | orchestrator |
2025-08-29 15:03:01.572708 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 15:03:01.572713 | orchestrator | Friday 29 August 2025 15:00:15 +0000 (0:00:00.763) 0:09:04.457 *********
2025-08-29 15:03:01.572719 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.572725 | orchestrator |
2025-08-29 15:03:01.572731 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 15:03:01.572736 | orchestrator | Friday 29 August 2025 15:00:17 +0000 (0:00:01.240) 0:09:05.698 *********
2025-08-29 15:03:01.572745 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.572751 | orchestrator |
2025-08-29 15:03:01.572757 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 15:03:01.572762 | orchestrator | Friday 29 August 2025 15:00:18 +0000 (0:00:01.255) 0:09:06.953 *********
2025-08-29 15:03:01.572767 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572773 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.572778 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.572784 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.572789 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.572795 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.572800 | orchestrator |
2025-08-29 15:03:01.572805 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 15:03:01.572811 | orchestrator | Friday 29 August 2025 15:00:19 +0000 (0:00:01.225) 0:09:08.178 *********
2025-08-29 15:03:01.572819 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.572825 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.572830 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.572835 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.572841 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.572846 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.572852 | orchestrator |
2025-08-29 15:03:01.572857 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 15:03:01.572863 | orchestrator | Friday 29 August 2025 15:00:20 +0000 (0:00:00.748) 0:09:08.927 *********
2025-08-29 15:03:01.572868 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.572874 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.572879 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.572884 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.572890 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.572895 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.572901 | orchestrator |
2025-08-29 15:03:01.572906 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 15:03:01.572911 | orchestrator | Friday 29 August 2025 15:00:21 +0000 (0:00:01.027) 0:09:09.955 *********
2025-08-29 15:03:01.572917 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.572922 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.572928 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.572933 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.572938 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.572944 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.572949 | orchestrator |
2025-08-29 15:03:01.572955 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 15:03:01.572960 | orchestrator | Friday 29 August 2025 15:00:22 +0000 (0:00:00.719) 0:09:10.675 *********
2025-08-29 15:03:01.572966 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.572971 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.572976 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.572982 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.572987 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.572993 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.572998 | orchestrator |
2025-08-29 15:03:01.573004 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 15:03:01.573009 | orchestrator | Friday 29 August 2025 15:00:23 +0000 (0:00:01.044) 0:09:11.720 *********
2025-08-29 15:03:01.573014 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.573020 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.573025 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.573031 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.573036 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.573041 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.573047 | orchestrator |
2025-08-29 15:03:01.573052 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 15:03:01.573061 | orchestrator | Friday 29 August 2025 15:00:24 +0000 (0:00:00.949) 0:09:12.669 *********
2025-08-29 15:03:01.573066 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.573072 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.573077 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.573083 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.573088 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.573093 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.573099 | orchestrator |
2025-08-29 15:03:01.573104 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 15:03:01.573110 | orchestrator | Friday 29 August 2025 15:00:24 +0000 (0:00:00.617) 0:09:13.287 *********
2025-08-29 15:03:01.573115 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.573121 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.573126 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.573135 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.573140 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.573146 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.573151 | orchestrator |
2025-08-29 15:03:01.573157 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 15:03:01.573162 | orchestrator | Friday 29 August 2025 15:00:26 +0000 (0:00:01.615) 0:09:14.902 *********
2025-08-29 15:03:01.573167 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.573173 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.573178 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.573183 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.573189 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.573194 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.573199 | orchestrator |
2025-08-29 15:03:01.573205 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 15:03:01.573210 | orchestrator | Friday 29 August 2025 15:00:27 +0000 (0:00:01.042) 0:09:15.945 *********
2025-08-29 15:03:01.573216 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.573221 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.573226 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.573232 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.573237 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.573242 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.573248 | orchestrator |
2025-08-29 15:03:01.573253 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 15:03:01.573259 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:00.852) 0:09:16.797 *********
2025-08-29 15:03:01.573264 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.573270 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.573278 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.573284 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.573289 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.573294 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.573300 | orchestrator |
2025-08-29 15:03:01.573305 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 15:03:01.573311 | orchestrator | Friday 29 August 2025 15:00:28 +0000 (0:00:00.618) 0:09:17.415 *********
2025-08-29 15:03:01.573316 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.573322 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.573327 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.573333 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.573338 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.573343 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.573349 | orchestrator |
2025-08-29 15:03:01.573354 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 15:03:01.573360 | orchestrator | Friday 29 August 2025 15:00:29 +0000 (0:00:00.831) 0:09:18.247 *********
2025-08-29 15:03:01.573365 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.573370 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.573376 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.573381 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.573387 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.573392 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.573397 | orchestrator |
2025-08-29 15:03:01.573403 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 15:03:01.573408 | orchestrator | Friday 29 August 2025 15:00:30 +0000 (0:00:00.615) 0:09:18.862 *********
2025-08-29 15:03:01.573414 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.573419 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.573424 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.573430 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.573435 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.573440 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.573446 | orchestrator |
2025-08-29 15:03:01.573451 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 15:03:01.573460 | orchestrator | Friday 29 August 2025 15:00:31 +0000 (0:00:00.825) 0:09:19.687 *********
2025-08-29 15:03:01.573466 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.573471 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.573477 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.573482 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.573487 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.573493 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.573498 | orchestrator |
2025-08-29 15:03:01.573512 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 15:03:01.573518 | orchestrator | Friday 29 August 2025 15:00:31 +0000 (0:00:00.578) 0:09:20.266 *********
2025-08-29 15:03:01.573524 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.573529 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.573534 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.573540 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:01.573545 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:03:01.573550 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:03:01.573556 | orchestrator |
2025-08-29 15:03:01.573561 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 15:03:01.573567 | orchestrator | Friday 29 August 2025 15:00:32 +0000 (0:00:00.824) 0:09:21.090 *********
2025-08-29 15:03:01.573572 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.573578 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.573583 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.573589 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.573594 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.573599 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.573605 | orchestrator |
2025-08-29 15:03:01.573613 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 15:03:01.573619 | orchestrator | Friday 29 August 2025 15:00:33 +0000 (0:00:00.610) 0:09:21.701 *********
2025-08-29 15:03:01.573624 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.573630 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.573635 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.573640 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.573646 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.573651 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.573671 | orchestrator |
2025-08-29 15:03:01.573677 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 15:03:01.573683 | orchestrator | Friday 29 August 2025 15:00:34 +0000 (0:00:00.864) 0:09:22.566 *********
2025-08-29 15:03:01.573688 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.573693 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.573699 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.573704 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.573709 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.573714 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.573720 | orchestrator |
2025-08-29 15:03:01.573725 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-08-29 15:03:01.573731 | orchestrator | Friday 29 August 2025 15:00:35 +0000 (0:00:01.251) 0:09:23.818 *********
2025-08-29 15:03:01.573736 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:03:01.573742 | orchestrator |
2025-08-29 15:03:01.573747 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-08-29 15:03:01.573753 | orchestrator | Friday 29 August 2025 15:00:39 +0000 (0:00:03.979) 0:09:27.797 *********
2025-08-29 15:03:01.573758 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:03:01.573763 | orchestrator |
2025-08-29 15:03:01.573769 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-08-29 15:03:01.573774 | orchestrator | Friday 29 August 2025 15:00:41 +0000 (0:00:02.026) 0:09:29.823 *********
2025-08-29 15:03:01.573780 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.573789 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.573795 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.573800 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.573805 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.573811 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.573816 | orchestrator |
2025-08-29 15:03:01.573822 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-08-29 15:03:01.573827 | orchestrator | Friday 29 August 2025 15:00:42 +0000 (0:00:01.512) 0:09:31.336 *********
2025-08-29 15:03:01.573836 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.573842 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.573847 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.573852 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.573858 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.573863 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.573868 | orchestrator |
2025-08-29 15:03:01.573874 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-08-29 15:03:01.573879 | orchestrator | Friday 29 August 2025 15:00:44 +0000 (0:00:01.242) 0:09:32.579 *********
2025-08-29 15:03:01.573885 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.573892 | orchestrator |
2025-08-29 15:03:01.573897 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-08-29 15:03:01.573903 | orchestrator | Friday 29 August 2025 15:00:45 +0000 (0:00:01.283) 0:09:33.862 *********
2025-08-29 15:03:01.573908 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.573914 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.573919 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.573924 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.573930 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.573935 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.573941 | orchestrator |
2025-08-29 15:03:01.573946 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-08-29 15:03:01.573952 | orchestrator | Friday 29 August 2025 15:00:47 +0000 (0:00:01.662) 0:09:35.525 *********
2025-08-29 15:03:01.573957 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.573963 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.573968 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.573973 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.573979 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.573984 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.573989 | orchestrator |
2025-08-29 15:03:01.573995 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-08-29 15:03:01.574000 | orchestrator | Friday 29 August 2025 15:00:50 +0000 (0:00:03.697) 0:09:39.223 *********
2025-08-29 15:03:01.574006 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:01.574011 | orchestrator |
2025-08-29 15:03:01.574040 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-08-29 15:03:01.574046 | orchestrator | Friday 29 August 2025 15:00:52 +0000 (0:00:01.318) 0:09:40.541 *********
2025-08-29 15:03:01.574052 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574057 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574062 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574068 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.574073 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.574079 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.574084 | orchestrator |
2025-08-29 15:03:01.574089 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-08-29 15:03:01.574095 | orchestrator | Friday 29 August 2025 15:00:52 +0000 (0:00:00.636) 0:09:41.178 *********
2025-08-29 15:03:01.574100 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:03:01.574110 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:03:01.574115 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:03:01.574120 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:01.574126 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:01.574131 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:01.574137 | orchestrator |
2025-08-29 15:03:01.574146 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-08-29 15:03:01.574152 | orchestrator | Friday 29 August 2025 15:00:55 +0000 (0:00:02.479) 0:09:43.658 *********
2025-08-29 15:03:01.574157 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574163 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574168 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574174 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:01.574179 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:03:01.574184 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:03:01.574190 | orchestrator |
2025-08-29 15:03:01.574195 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-08-29 15:03:01.574200 | orchestrator |
2025-08-29 15:03:01.574206 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-08-29 15:03:01.574211 | orchestrator | Friday 29 August 2025 15:00:56 +0000 (0:00:00.895) 0:09:44.554 *********
2025-08-29 15:03:01.574217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:03:01.574222 | orchestrator |
2025-08-29 15:03:01.574228 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-08-29 15:03:01.574233 | orchestrator | Friday 29 August 2025 15:00:56 +0000 (0:00:00.814) 0:09:45.368 *********
2025-08-29 15:03:01.574239 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:03:01.574244 | orchestrator |
2025-08-29 15:03:01.574250 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-08-29 15:03:01.574255 | orchestrator | Friday 29 August 2025 15:00:57 +0000 (0:00:00.550) 0:09:45.919 *********
2025-08-29 15:03:01.574260 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574266 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574271 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574277 | orchestrator |
2025-08-29 15:03:01.574282 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-08-29 15:03:01.574287 | orchestrator | Friday 29 August 2025 15:00:58 +0000 (0:00:00.598) 0:09:46.517 *********
2025-08-29 15:03:01.574293 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574298 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574304 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574309 | orchestrator |
2025-08-29 15:03:01.574315 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-08-29 15:03:01.574324 | orchestrator | Friday 29 August 2025 15:00:58 +0000 (0:00:00.720) 0:09:47.237 *********
2025-08-29 15:03:01.574329 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574335 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574340 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574346 | orchestrator |
2025-08-29 15:03:01.574351 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-08-29 15:03:01.574357 | orchestrator | Friday 29 August 2025 15:00:59 +0000 (0:00:00.685) 0:09:47.923 *********
2025-08-29 15:03:01.574362 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574367 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574373 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574378 | orchestrator |
2025-08-29 15:03:01.574384 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-08-29 15:03:01.574389 | orchestrator | Friday 29 August 2025 15:01:00 +0000 (0:00:00.764) 0:09:48.687 *********
2025-08-29 15:03:01.574395 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574400 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574406 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574417 | orchestrator |
2025-08-29 15:03:01.574422 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-08-29 15:03:01.574428 | orchestrator | Friday 29 August 2025 15:01:00 +0000 (0:00:00.479) 0:09:49.167 *********
2025-08-29 15:03:01.574433 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574439 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574444 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574449 | orchestrator |
2025-08-29 15:03:01.574455 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-08-29 15:03:01.574460 | orchestrator | Friday 29 August 2025 15:01:00 +0000 (0:00:00.306) 0:09:49.473 *********
2025-08-29 15:03:01.574466 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574471 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574477 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574482 | orchestrator |
2025-08-29 15:03:01.574487 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-08-29 15:03:01.574493 | orchestrator | Friday 29 August 2025 15:01:01 +0000 (0:00:00.266) 0:09:49.739 *********
2025-08-29 15:03:01.574498 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574504 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574509 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574515 | orchestrator |
2025-08-29 15:03:01.574520 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-08-29 15:03:01.574526 | orchestrator | Friday 29 August 2025 15:01:02 +0000 (0:00:00.843) 0:09:50.583 *********
2025-08-29 15:03:01.574531 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574536 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574542 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574547 | orchestrator |
2025-08-29 15:03:01.574553 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-08-29 15:03:01.574558 | orchestrator | Friday 29 August 2025 15:01:03 +0000 (0:00:01.135) 0:09:51.718 *********
2025-08-29 15:03:01.574564 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574569 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574574 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574580 | orchestrator |
2025-08-29 15:03:01.574585 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-08-29 15:03:01.574591 | orchestrator | Friday 29 August 2025 15:01:03 +0000 (0:00:00.295) 0:09:52.014 *********
2025-08-29 15:03:01.574596 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574602 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574607 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574612 | orchestrator |
2025-08-29 15:03:01.574618 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-08-29 15:03:01.574638 | orchestrator | Friday 29 August 2025 15:01:03 +0000 (0:00:00.331) 0:09:52.346 *********
2025-08-29 15:03:01.574643 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574649 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574685 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574693 | orchestrator |
2025-08-29 15:03:01.574699 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-08-29 15:03:01.574704 | orchestrator | Friday 29 August 2025 15:01:04 +0000 (0:00:00.318) 0:09:52.664 *********
2025-08-29 15:03:01.574709 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574715 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574720 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574726 | orchestrator |
2025-08-29 15:03:01.574731 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-08-29 15:03:01.574736 | orchestrator | Friday 29 August 2025 15:01:04 +0000 (0:00:00.653) 0:09:53.318 *********
2025-08-29 15:03:01.574742 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574747 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574752 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574758 | orchestrator |
2025-08-29 15:03:01.574763 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-08-29 15:03:01.574774 | orchestrator | Friday 29 August 2025 15:01:05 +0000 (0:00:00.473) 0:09:53.792 *********
2025-08-29 15:03:01.574779 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574785 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574790 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574796 | orchestrator |
2025-08-29 15:03:01.574801 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-08-29 15:03:01.574806 | orchestrator | Friday 29 August 2025 15:01:05 +0000 (0:00:00.345) 0:09:54.138 *********
2025-08-29 15:03:01.574812 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574818 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574823 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574828 | orchestrator |
2025-08-29 15:03:01.574834 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-08-29 15:03:01.574839 | orchestrator | Friday 29 August 2025 15:01:06 +0000 (0:00:00.352) 0:09:54.490 *********
2025-08-29 15:03:01.574845 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:03:01.574850 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574855 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574861 | orchestrator |
2025-08-29 15:03:01.574866 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-08-29 15:03:01.574872 | orchestrator | Friday 29 August 2025 15:01:06 +0000 (0:00:00.601) 0:09:55.092 *********
2025-08-29 15:03:01.574877 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574886 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574892 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574897 | orchestrator |
2025-08-29 15:03:01.574903 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-08-29 15:03:01.574908 | orchestrator | Friday 29 August 2025 15:01:06 +0000 (0:00:00.375) 0:09:55.467 *********
2025-08-29 15:03:01.574913 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:03:01.574919 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:03:01.574924 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:03:01.574928 | orchestrator |
2025-08-29 15:03:01.574933 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-08-29 15:03:01.574938 | orchestrator | Friday 29 August 2025 15:01:07 +0000 (0:00:00.575) 0:09:56.043 *********
2025-08-29 15:03:01.574943 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:03:01.574948 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:03:01.574952 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-08-29 15:03:01.574957 | orchestrator |
2025-08-29 15:03:01.574962 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-08-29 15:03:01.574967 | orchestrator | Friday 29 August 2025 15:01:08 +0000 (0:00:00.734) 0:09:56.777 *********
2025-08-29 15:03:01.574971 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:03:01.574976 | orchestrator |
2025-08-29 15:03:01.574981 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-08-29 15:03:01.574986 | orchestrator | Friday 29 August 2025 15:01:10 +0000 (0:00:02.193) 0:09:58.970 *********
2025-08-29 15:03:01.574993 | orchestrator | skipping: [testbed-node-3] =>
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-08-29 15:03:01.575000 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.575005 | orchestrator | 2025-08-29 15:03:01.575010 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-08-29 15:03:01.575015 | orchestrator | Friday 29 August 2025 15:01:10 +0000 (0:00:00.300) 0:09:59.271 ********* 2025-08-29 15:03:01.575022 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:03:01.575038 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:03:01.575043 | orchestrator | 2025-08-29 15:03:01.575048 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-08-29 15:03:01.575053 | orchestrator | Friday 29 August 2025 15:01:18 +0000 (0:00:07.505) 0:10:06.777 ********* 2025-08-29 15:03:01.575058 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-08-29 15:03:01.575063 | orchestrator | 2025-08-29 15:03:01.575068 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-08-29 15:03:01.575072 | orchestrator | Friday 29 August 2025 15:01:21 +0000 (0:00:03.658) 0:10:10.435 ********* 2025-08-29 15:03:01.575081 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-08-29 15:03:01.575086 | orchestrator | 2025-08-29 15:03:01.575090 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-08-29 15:03:01.575095 | orchestrator | Friday 29 August 2025 15:01:23 +0000 (0:00:01.161) 0:10:11.597 ********* 2025-08-29 15:03:01.575100 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:03:01.575105 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:03:01.575110 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-08-29 15:03:01.575115 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-08-29 15:03:01.575119 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-08-29 15:03:01.575124 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-08-29 15:03:01.575129 | orchestrator | 2025-08-29 15:03:01.575134 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-08-29 15:03:01.575139 | orchestrator | Friday 29 August 2025 15:01:24 +0000 (0:00:01.233) 0:10:12.831 ********* 2025-08-29 15:03:01.575143 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.575148 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:03:01.575153 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:03:01.575158 | orchestrator | 2025-08-29 15:03:01.575163 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:03:01.575168 | orchestrator | Friday 29 August 2025 15:01:26 +0000 (0:00:02.155) 0:10:14.986 ********* 2025-08-29 15:03:01.575172 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:03:01.575177 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-08-29 15:03:01.575182 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.575187 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:03:01.575192 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:03:01.575196 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.575201 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:03:01.575209 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:03:01.575214 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.575219 | orchestrator | 2025-08-29 15:03:01.575223 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-08-29 15:03:01.575228 | orchestrator | Friday 29 August 2025 15:01:27 +0000 (0:00:01.481) 0:10:16.467 ********* 2025-08-29 15:03:01.575233 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.575238 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.575243 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.575247 | orchestrator | 2025-08-29 15:03:01.575252 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-08-29 15:03:01.575257 | orchestrator | Friday 29 August 2025 15:01:30 +0000 (0:00:02.938) 0:10:19.405 ********* 2025-08-29 15:03:01.575266 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.575271 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.575276 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.575281 | orchestrator | 2025-08-29 15:03:01.575285 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-08-29 15:03:01.575290 | orchestrator | Friday 29 August 2025 15:01:31 +0000 (0:00:00.889) 0:10:20.295 ********* 2025-08-29 15:03:01.575295 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-08-29 15:03:01.575300 | orchestrator | 2025-08-29 15:03:01.575305 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-08-29 15:03:01.575310 | orchestrator | Friday 29 August 2025 15:01:32 +0000 (0:00:00.817) 0:10:21.112 ********* 2025-08-29 15:03:01.575315 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.575319 | orchestrator | 2025-08-29 15:03:01.575324 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-08-29 15:03:01.575329 | orchestrator | Friday 29 August 2025 15:01:33 +0000 (0:00:01.171) 0:10:22.283 ********* 2025-08-29 15:03:01.575334 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.575339 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.575343 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.575348 | orchestrator | 2025-08-29 15:03:01.575353 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-08-29 15:03:01.575358 | orchestrator | Friday 29 August 2025 15:01:35 +0000 (0:00:01.458) 0:10:23.742 ********* 2025-08-29 15:03:01.575363 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.575367 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.575372 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.575377 | orchestrator | 2025-08-29 15:03:01.575382 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-08-29 15:03:01.575387 | orchestrator | Friday 29 August 2025 15:01:36 +0000 (0:00:01.183) 0:10:24.926 ********* 2025-08-29 15:03:01.575392 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.575396 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.575401 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.575406 | orchestrator | 2025-08-29 
15:03:01.575411 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-08-29 15:03:01.575415 | orchestrator | Friday 29 August 2025 15:01:38 +0000 (0:00:01.794) 0:10:26.720 ********* 2025-08-29 15:03:01.575420 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.575425 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.575430 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.575435 | orchestrator | 2025-08-29 15:03:01.575439 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-08-29 15:03:01.575444 | orchestrator | Friday 29 August 2025 15:01:40 +0000 (0:00:02.299) 0:10:29.020 ********* 2025-08-29 15:03:01.575452 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.575457 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.575462 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.575467 | orchestrator | 2025-08-29 15:03:01.575472 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:03:01.575477 | orchestrator | Friday 29 August 2025 15:01:41 +0000 (0:00:01.232) 0:10:30.252 ********* 2025-08-29 15:03:01.575481 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.575486 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.575491 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.575496 | orchestrator | 2025-08-29 15:03:01.575501 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-08-29 15:03:01.575505 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:00.945) 0:10:31.198 ********* 2025-08-29 15:03:01.575510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.575519 | orchestrator | 2025-08-29 15:03:01.575524 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2025-08-29 15:03:01.575529 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:00.601) 0:10:31.799 ********* 2025-08-29 15:03:01.575533 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.575538 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.575543 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.575548 | orchestrator | 2025-08-29 15:03:01.575553 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-08-29 15:03:01.575557 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:00.303) 0:10:32.103 ********* 2025-08-29 15:03:01.575562 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.575567 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.575572 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.575577 | orchestrator | 2025-08-29 15:03:01.575581 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-08-29 15:03:01.575586 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:01.497) 0:10:33.601 ********* 2025-08-29 15:03:01.575591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:01.575596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:01.575601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:01.575606 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.575610 | orchestrator | 2025-08-29 15:03:01.575615 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-08-29 15:03:01.575623 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.724) 0:10:34.325 ********* 2025-08-29 15:03:01.575628 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.575633 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.575637 | orchestrator | ok: [testbed-node-5] 2025-08-29 
15:03:01.575642 | orchestrator | 2025-08-29 15:03:01.575647 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-08-29 15:03:01.575652 | orchestrator | 2025-08-29 15:03:01.575669 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-08-29 15:03:01.575674 | orchestrator | Friday 29 August 2025 15:01:46 +0000 (0:00:00.635) 0:10:34.961 ********* 2025-08-29 15:03:01.575679 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.575684 | orchestrator | 2025-08-29 15:03:01.575689 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-08-29 15:03:01.575694 | orchestrator | Friday 29 August 2025 15:01:47 +0000 (0:00:00.742) 0:10:35.703 ********* 2025-08-29 15:03:01.575699 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.575703 | orchestrator | 2025-08-29 15:03:01.575708 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-08-29 15:03:01.575713 | orchestrator | Friday 29 August 2025 15:01:47 +0000 (0:00:00.513) 0:10:36.217 ********* 2025-08-29 15:03:01.575718 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.575723 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.575727 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.575732 | orchestrator | 2025-08-29 15:03:01.575737 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-08-29 15:03:01.575742 | orchestrator | Friday 29 August 2025 15:01:48 +0000 (0:00:00.526) 0:10:36.743 ********* 2025-08-29 15:03:01.575747 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.575751 | orchestrator | ok: [testbed-node-4] 2025-08-29 
15:03:01.575756 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.575761 | orchestrator | 2025-08-29 15:03:01.575766 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-08-29 15:03:01.575770 | orchestrator | Friday 29 August 2025 15:01:48 +0000 (0:00:00.715) 0:10:37.458 ********* 2025-08-29 15:03:01.575775 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.575780 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.575789 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.575794 | orchestrator | 2025-08-29 15:03:01.575798 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-08-29 15:03:01.575803 | orchestrator | Friday 29 August 2025 15:01:49 +0000 (0:00:00.727) 0:10:38.186 ********* 2025-08-29 15:03:01.575808 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.575813 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.575817 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.575822 | orchestrator | 2025-08-29 15:03:01.575827 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-08-29 15:03:01.575832 | orchestrator | Friday 29 August 2025 15:01:50 +0000 (0:00:00.745) 0:10:38.932 ********* 2025-08-29 15:03:01.575837 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.575841 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.575846 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.575851 | orchestrator | 2025-08-29 15:03:01.575856 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-08-29 15:03:01.575860 | orchestrator | Friday 29 August 2025 15:01:51 +0000 (0:00:00.577) 0:10:39.510 ********* 2025-08-29 15:03:01.575865 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.575870 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.575875 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 15:03:01.575883 | orchestrator | 2025-08-29 15:03:01.575888 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-08-29 15:03:01.575892 | orchestrator | Friday 29 August 2025 15:01:51 +0000 (0:00:00.326) 0:10:39.836 ********* 2025-08-29 15:03:01.575897 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.575902 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.575907 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.575912 | orchestrator | 2025-08-29 15:03:01.575916 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-08-29 15:03:01.575921 | orchestrator | Friday 29 August 2025 15:01:51 +0000 (0:00:00.323) 0:10:40.159 ********* 2025-08-29 15:03:01.575926 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.575931 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.575936 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.575940 | orchestrator | 2025-08-29 15:03:01.575945 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-08-29 15:03:01.575950 | orchestrator | Friday 29 August 2025 15:01:52 +0000 (0:00:00.723) 0:10:40.883 ********* 2025-08-29 15:03:01.575955 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.575963 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.575970 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.575978 | orchestrator | 2025-08-29 15:03:01.575984 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-08-29 15:03:01.575992 | orchestrator | Friday 29 August 2025 15:01:53 +0000 (0:00:00.972) 0:10:41.855 ********* 2025-08-29 15:03:01.576000 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.576008 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.576017 | orchestrator | skipping: [testbed-node-5] 2025-08-29 
15:03:01.576034 | orchestrator | 2025-08-29 15:03:01.576042 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-08-29 15:03:01.576050 | orchestrator | Friday 29 August 2025 15:01:53 +0000 (0:00:00.324) 0:10:42.180 ********* 2025-08-29 15:03:01.576058 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.576066 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.576073 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.576081 | orchestrator | 2025-08-29 15:03:01.576088 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-08-29 15:03:01.576096 | orchestrator | Friday 29 August 2025 15:01:54 +0000 (0:00:00.333) 0:10:42.513 ********* 2025-08-29 15:03:01.576104 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.576111 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.576119 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.576135 | orchestrator | 2025-08-29 15:03:01.576148 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-08-29 15:03:01.576157 | orchestrator | Friday 29 August 2025 15:01:54 +0000 (0:00:00.376) 0:10:42.890 ********* 2025-08-29 15:03:01.576165 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.576174 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.576179 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.576184 | orchestrator | 2025-08-29 15:03:01.576189 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-08-29 15:03:01.576194 | orchestrator | Friday 29 August 2025 15:01:54 +0000 (0:00:00.589) 0:10:43.480 ********* 2025-08-29 15:03:01.576198 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.576203 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.576208 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.576213 | orchestrator | 2025-08-29 
15:03:01.576218 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-08-29 15:03:01.576222 | orchestrator | Friday 29 August 2025 15:01:55 +0000 (0:00:00.386) 0:10:43.866 ********* 2025-08-29 15:03:01.576227 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.576232 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.576237 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.576241 | orchestrator | 2025-08-29 15:03:01.576246 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-08-29 15:03:01.576251 | orchestrator | Friday 29 August 2025 15:01:55 +0000 (0:00:00.368) 0:10:44.234 ********* 2025-08-29 15:03:01.576256 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.576260 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.576265 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.576270 | orchestrator | 2025-08-29 15:03:01.576275 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-08-29 15:03:01.576279 | orchestrator | Friday 29 August 2025 15:01:56 +0000 (0:00:00.313) 0:10:44.548 ********* 2025-08-29 15:03:01.576284 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.576289 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.576294 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.576298 | orchestrator | 2025-08-29 15:03:01.576303 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-08-29 15:03:01.576308 | orchestrator | Friday 29 August 2025 15:01:56 +0000 (0:00:00.575) 0:10:45.124 ********* 2025-08-29 15:03:01.576313 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.576317 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.576322 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.576327 | orchestrator | 2025-08-29 15:03:01.576332 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-08-29 15:03:01.576336 | orchestrator | Friday 29 August 2025 15:01:56 +0000 (0:00:00.334) 0:10:45.458 ********* 2025-08-29 15:03:01.576341 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.576346 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.576351 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.576355 | orchestrator | 2025-08-29 15:03:01.576360 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-08-29 15:03:01.576365 | orchestrator | Friday 29 August 2025 15:01:57 +0000 (0:00:00.565) 0:10:46.024 ********* 2025-08-29 15:03:01.576370 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.576375 | orchestrator | 2025-08-29 15:03:01.576379 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 15:03:01.576384 | orchestrator | Friday 29 August 2025 15:01:58 +0000 (0:00:00.780) 0:10:46.804 ********* 2025-08-29 15:03:01.576389 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.576394 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:03:01.576404 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:03:01.576412 | orchestrator | 2025-08-29 15:03:01.576420 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:03:01.576434 | orchestrator | Friday 29 August 2025 15:02:00 +0000 (0:00:02.071) 0:10:48.876 ********* 2025-08-29 15:03:01.576442 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:03:01.576449 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-08-29 15:03:01.576458 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.576465 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-08-29 15:03:01.576473 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-08-29 15:03:01.576480 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.576488 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:03:01.576496 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-08-29 15:03:01.576505 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.576510 | orchestrator | 2025-08-29 15:03:01.576515 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-08-29 15:03:01.576520 | orchestrator | Friday 29 August 2025 15:02:01 +0000 (0:00:01.240) 0:10:50.116 ********* 2025-08-29 15:03:01.576524 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.576529 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.576534 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.576539 | orchestrator | 2025-08-29 15:03:01.576543 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-08-29 15:03:01.576548 | orchestrator | Friday 29 August 2025 15:02:01 +0000 (0:00:00.331) 0:10:50.448 ********* 2025-08-29 15:03:01.576553 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.576558 | orchestrator | 2025-08-29 15:03:01.576562 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-08-29 15:03:01.576567 | orchestrator | Friday 29 August 2025 15:02:02 +0000 (0:00:00.804) 0:10:51.252 ********* 2025-08-29 15:03:01.576572 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:03:01.576581 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:03:01.576586 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:03:01.576591 | orchestrator | 2025-08-29 15:03:01.576596 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-08-29 15:03:01.576601 | orchestrator | Friday 29 August 2025 15:02:03 +0000 (0:00:00.814) 0:10:52.067 ********* 2025-08-29 15:03:01.576605 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.576610 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:03:01.576615 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.576620 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:03:01.576624 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.576630 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-08-29 15:03:01.576634 | orchestrator | 2025-08-29 15:03:01.576639 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-08-29 15:03:01.576644 | orchestrator | Friday 29 August 2025 15:02:07 +0000 (0:00:04.266) 0:10:56.334 ********* 2025-08-29 15:03:01.576649 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.576653 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:03:01.576697 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.576702 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:03:01.576707 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:03:01.576712 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:03:01.576717 | orchestrator | 2025-08-29 15:03:01.576721 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-08-29 15:03:01.576726 | orchestrator | Friday 29 August 2025 15:02:10 +0000 (0:00:02.923) 0:10:59.258 ********* 2025-08-29 15:03:01.576731 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:03:01.576735 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.576740 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:03:01.576745 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.576750 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:03:01.576755 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.576759 | orchestrator | 2025-08-29 15:03:01.576764 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-08-29 15:03:01.576770 | orchestrator | Friday 29 August 2025 15:02:12 +0000 (0:00:01.305) 0:11:00.563 ********* 2025-08-29 15:03:01.576779 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-08-29 15:03:01.576787 | orchestrator | 2025-08-29 15:03:01.576795 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-08-29 15:03:01.576808 | orchestrator | Friday 29 August 2025 15:02:12 +0000 (0:00:00.237) 0:11:00.801 ********* 2025-08-29 15:03:01.576817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-08-29 15:03:01.576824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576855 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.576862 | orchestrator | 2025-08-29 15:03:01.576870 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-08-29 15:03:01.576878 | orchestrator | Friday 29 August 2025 15:02:12 +0000 (0:00:00.598) 0:11:01.399 ********* 2025-08-29 15:03:01.576886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-08-29 15:03:01.576923 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
15:03:01.576928 | orchestrator | 2025-08-29 15:03:01.576932 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-08-29 15:03:01.576937 | orchestrator | Friday 29 August 2025 15:02:13 +0000 (0:00:00.586) 0:11:01.986 ********* 2025-08-29 15:03:01.576941 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:03:01.576953 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:03:01.576958 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:03:01.576962 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:03:01.576967 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-08-29 15:03:01.576971 | orchestrator | 2025-08-29 15:03:01.576976 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-08-29 15:03:01.576980 | orchestrator | Friday 29 August 2025 15:02:45 +0000 (0:00:32.108) 0:11:34.094 ********* 2025-08-29 15:03:01.576985 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.576989 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.576994 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.576998 | orchestrator | 2025-08-29 15:03:01.577003 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-08-29 15:03:01.577010 | orchestrator | 
Friday 29 August 2025 15:02:45 +0000 (0:00:00.331) 0:11:34.426 ********* 2025-08-29 15:03:01.577018 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.577025 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.577031 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.577038 | orchestrator | 2025-08-29 15:03:01.577044 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-08-29 15:03:01.577051 | orchestrator | Friday 29 August 2025 15:02:46 +0000 (0:00:00.645) 0:11:35.072 ********* 2025-08-29 15:03:01.577058 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.577065 | orchestrator | 2025-08-29 15:03:01.577072 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-08-29 15:03:01.577080 | orchestrator | Friday 29 August 2025 15:02:47 +0000 (0:00:00.581) 0:11:35.653 ********* 2025-08-29 15:03:01.577088 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.577095 | orchestrator | 2025-08-29 15:03:01.577103 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-08-29 15:03:01.577109 | orchestrator | Friday 29 August 2025 15:02:47 +0000 (0:00:00.785) 0:11:36.438 ********* 2025-08-29 15:03:01.577113 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.577118 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.577122 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.577127 | orchestrator | 2025-08-29 15:03:01.577135 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-08-29 15:03:01.577141 | orchestrator | Friday 29 August 2025 15:02:49 +0000 (0:00:01.327) 0:11:37.766 ********* 2025-08-29 15:03:01.577148 | orchestrator | changed: 
[testbed-node-3] 2025-08-29 15:03:01.577155 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.577163 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.577170 | orchestrator | 2025-08-29 15:03:01.577178 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-08-29 15:03:01.577185 | orchestrator | Friday 29 August 2025 15:02:50 +0000 (0:00:01.200) 0:11:38.967 ********* 2025-08-29 15:03:01.577191 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:03:01.577198 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:03:01.577206 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:03:01.577214 | orchestrator | 2025-08-29 15:03:01.577221 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-08-29 15:03:01.577235 | orchestrator | Friday 29 August 2025 15:02:52 +0000 (0:00:01.852) 0:11:40.819 ********* 2025-08-29 15:03:01.577244 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-08-29 15:03:01.577251 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-08-29 15:03:01.577259 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-08-29 15:03:01.577266 | orchestrator | 2025-08-29 15:03:01.577273 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-08-29 15:03:01.577281 | orchestrator | Friday 29 August 2025 15:02:55 +0000 (0:00:02.687) 0:11:43.506 ********* 2025-08-29 15:03:01.577289 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.577296 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.577304 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.577314 | orchestrator 
| 2025-08-29 15:03:01.577319 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-08-29 15:03:01.577323 | orchestrator | Friday 29 August 2025 15:02:55 +0000 (0:00:00.370) 0:11:43.876 ********* 2025-08-29 15:03:01.577333 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:03:01.577338 | orchestrator | 2025-08-29 15:03:01.577342 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-08-29 15:03:01.577347 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:00.876) 0:11:44.753 ********* 2025-08-29 15:03:01.577351 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.577356 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.577360 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.577365 | orchestrator | 2025-08-29 15:03:01.577369 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-08-29 15:03:01.577374 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:00.332) 0:11:45.086 ********* 2025-08-29 15:03:01.577378 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:03:01.577383 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:03:01.577387 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:03:01.577392 | orchestrator | 2025-08-29 15:03:01.577396 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-08-29 15:03:01.577401 | orchestrator | Friday 29 August 2025 15:02:56 +0000 (0:00:00.338) 0:11:45.424 ********* 2025-08-29 15:03:01.577405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:03:01.577410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:03:01.577415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:03:01.577419 | orchestrator 
| skipping: [testbed-node-3] 2025-08-29 15:03:01.577424 | orchestrator | 2025-08-29 15:03:01.577431 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-08-29 15:03:01.577438 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:01.129) 0:11:46.554 ********* 2025-08-29 15:03:01.577445 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:03:01.577453 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:03:01.577461 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:03:01.577468 | orchestrator | 2025-08-29 15:03:01.577475 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:03:01.577482 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-08-29 15:03:01.577488 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-08-29 15:03:01.577493 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-08-29 15:03:01.577502 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-08-29 15:03:01.577507 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-08-29 15:03:01.577511 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-08-29 15:03:01.577516 | orchestrator | 2025-08-29 15:03:01.577521 | orchestrator | 2025-08-29 15:03:01.577525 | orchestrator | 2025-08-29 15:03:01.577530 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:03:01.577534 | orchestrator | Friday 29 August 2025 15:02:58 +0000 (0:00:00.259) 0:11:46.813 ********* 2025-08-29 15:03:01.577542 | orchestrator | =============================================================================== 
2025-08-29 15:03:01.577547 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 86.30s 2025-08-29 15:03:01.577551 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.43s 2025-08-29 15:03:01.577556 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.11s 2025-08-29 15:03:01.577560 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.72s 2025-08-29 15:03:01.577565 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.46s 2025-08-29 15:03:01.577569 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.14s 2025-08-29 15:03:01.577574 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.61s 2025-08-29 15:03:01.577578 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.84s 2025-08-29 15:03:01.577583 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.81s 2025-08-29 15:03:01.577587 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.51s 2025-08-29 15:03:01.577592 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.53s 2025-08-29 15:03:01.577596 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 6.38s 2025-08-29 15:03:01.577601 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.75s 2025-08-29 15:03:01.577605 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.27s 2025-08-29 15:03:01.577610 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.98s 2025-08-29 15:03:01.577614 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.78s 2025-08-29 
15:03:01.577619 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.71s 2025-08-29 15:03:01.577623 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.70s 2025-08-29 15:03:01.577628 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.67s 2025-08-29 15:03:01.577632 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.66s 2025-08-29 15:03:01.577640 | orchestrator | 2025-08-29 15:03:01 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED 2025-08-29 15:03:01.577645 | orchestrator | 2025-08-29 15:03:01 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED 2025-08-29 15:03:01.577649 | orchestrator | 2025-08-29 15:03:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:04.602895 | orchestrator | 2025-08-29 15:03:04 | INFO  | Task f9532f67-2c48-40a8-983e-5cb9fdd5a371 is in state STARTED 2025-08-29 15:03:04.605400 | orchestrator | 2025-08-29 15:03:04 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED 2025-08-29 15:03:04.607387 | orchestrator | 2025-08-29 15:03:04 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED 2025-08-29 15:03:04.607514 | orchestrator | 2025-08-29 15:03:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:56.469968 | orchestrator | 2025-08-29 15:03:56 | INFO  | Task f9532f67-2c48-40a8-983e-5cb9fdd5a371 is in state SUCCESS 2025-08-29 15:03:56.470875 | orchestrator | 2025-08-29 15:03:56.470912 | orchestrator | 2025-08-29 15:03:56.470922 |
orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:03:56.470932 | orchestrator | 2025-08-29 15:03:56.470941 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:03:56.471113 | orchestrator | Friday 29 August 2025 15:00:55 +0000 (0:00:00.315) 0:00:00.315 ********* 2025-08-29 15:03:56.471125 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:03:56.471135 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:03:56.471144 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:03:56.471153 | orchestrator | 2025-08-29 15:03:56.471162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:03:56.471171 | orchestrator | Friday 29 August 2025 15:00:56 +0000 (0:00:00.312) 0:00:00.628 ********* 2025-08-29 15:03:56.471180 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-08-29 15:03:56.471189 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-08-29 15:03:56.471198 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-08-29 15:03:56.471207 | orchestrator | 2025-08-29 15:03:56.471215 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-08-29 15:03:56.471224 | orchestrator | 2025-08-29 15:03:56.471233 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:03:56.471241 | orchestrator | Friday 29 August 2025 15:00:56 +0000 (0:00:00.422) 0:00:01.051 ********* 2025-08-29 15:03:56.471250 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:56.471259 | orchestrator | 2025-08-29 15:03:56.471268 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-08-29 15:03:56.471277 | orchestrator | Friday 29 August 2025 
15:00:56 +0000 (0:00:00.506) 0:00:01.557 ********* 2025-08-29 15:03:56.471286 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:03:56.471295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:03:56.471304 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-08-29 15:03:56.471313 | orchestrator | 2025-08-29 15:03:56.471321 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-08-29 15:03:56.471330 | orchestrator | Friday 29 August 2025 15:00:58 +0000 (0:00:01.652) 0:00:03.210 ********* 2025-08-29 15:03:56.471357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.471390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.471411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.471422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.471434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.471455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.471465 | orchestrator | 2025-08-29 15:03:56.471475 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:03:56.471484 | orchestrator | Friday 29 August 2025 15:01:00 +0000 (0:00:01.874) 0:00:05.085 ********* 2025-08-29 15:03:56.471492 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:03:56.471501 | orchestrator | 2025-08-29 15:03:56.471509 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-08-29 15:03:56.471518 | orchestrator | Friday 29 August 2025 15:01:01 +0000 (0:00:00.568) 0:00:05.653 ********* 2025-08-29 15:03:56.471536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.471546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.471556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.471577 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.471617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.471630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.471640 | orchestrator | 2025-08-29 15:03:56.471649 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-08-29 15:03:56.471657 | orchestrator | Friday 29 August 2025 15:01:04 +0000 (0:00:03.099) 0:00:08.752 ********* 2025-08-29 15:03:56.471667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:03:56.471686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:03:56.471697 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:56.471706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:03:56.471723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:03:56.471733 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:56.471745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:03:56.471765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:03:56.471777 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:56.471787 | orchestrator | 2025-08-29 15:03:56.471797 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-08-29 15:03:56.471806 | orchestrator | Friday 29 August 2025 15:01:05 +0000 (0:00:01.151) 0:00:09.903 ********* 2025-08-29 15:03:56.471815 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:03:56.471831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:03:56.471841 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:56.471856 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:03:56.471871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:03:56.471881 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
15:03:56.471890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-08-29 15:03:56.471907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-08-29 15:03:56.471917 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 15:03:56.471926 | orchestrator | 2025-08-29 15:03:56.472025 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-08-29 15:03:56.472038 | orchestrator | Friday 29 August 2025 15:01:06 +0000 (0:00:01.402) 0:00:11.306 ********* 2025-08-29 15:03:56.472047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.472062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2025-08-29 15:03:56.472072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.472149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 
15:03:56.472163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.472190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.472200 | orchestrator | 2025-08-29 15:03:56.472223 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-08-29 15:03:56.472242 | orchestrator | Friday 29 August 2025 15:01:09 +0000 (0:00:02.744) 0:00:14.050 ********* 2025-08-29 15:03:56.472251 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:56.472260 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:56.472269 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:56.472277 | orchestrator | 2025-08-29 15:03:56.472286 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-08-29 15:03:56.472295 | orchestrator | Friday 29 August 2025 15:01:12 +0000 (0:00:02.704) 0:00:16.755 ********* 2025-08-29 15:03:56.472303 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:03:56.472312 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:03:56.472320 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:03:56.472329 | orchestrator | 2025-08-29 15:03:56.472337 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-08-29 15:03:56.472346 | orchestrator | Friday 29 August 2025 15:01:14 +0000 (0:00:02.146) 0:00:18.902 ********* 2025-08-29 15:03:56.472355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.472370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.472390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-08-29 15:03:56.472404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.472415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.472431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-08-29 15:03:56.472446 | orchestrator | 2025-08-29 15:03:56.472456 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-08-29 15:03:56.472464 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:02.094) 0:00:20.996 ********* 2025-08-29 15:03:56.472473 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:03:56.472481 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:03:56.472490 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:03:56.472499 | orchestrator | 2025-08-29 15:03:56.472508 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 
2025-08-29 15:03:56.472516 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.292) 0:00:21.289 *********
2025-08-29 15:03:56.472525 | orchestrator |
2025-08-29 15:03:56.472533 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-08-29 15:03:56.472542 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.062) 0:00:21.351 *********
2025-08-29 15:03:56.472550 | orchestrator |
2025-08-29 15:03:56.472559 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-08-29 15:03:56.472568 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.080) 0:00:21.432 *********
2025-08-29 15:03:56.472576 | orchestrator |
2025-08-29 15:03:56.472585 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-08-29 15:03:56.472593 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.088) 0:00:21.520 *********
2025-08-29 15:03:56.472676 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:56.472691 | orchestrator |
2025-08-29 15:03:56.472707 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-08-29 15:03:56.472718 | orchestrator | Friday 29 August 2025 15:01:17 +0000 (0:00:00.211) 0:00:21.731 *********
2025-08-29 15:03:56.472727 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:03:56.472735 | orchestrator |
2025-08-29 15:03:56.472744 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-08-29 15:03:56.472752 | orchestrator | Friday 29 August 2025 15:01:17 +0000 (0:00:00.715) 0:00:22.447 *********
2025-08-29 15:03:56.472761 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:56.472770 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:56.472778 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:56.472789 | orchestrator |
2025-08-29 15:03:56.472799 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-08-29 15:03:56.472809 | orchestrator | Friday 29 August 2025 15:02:23 +0000 (0:01:05.906) 0:01:28.354 *********
2025-08-29 15:03:56.472819 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:56.472829 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:03:56.472838 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:03:56.472848 | orchestrator |
2025-08-29 15:03:56.472864 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-08-29 15:03:56.472874 | orchestrator | Friday 29 August 2025 15:03:44 +0000 (0:01:20.870) 0:02:49.225 *********
2025-08-29 15:03:56.472884 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:03:56.472894 | orchestrator |
2025-08-29 15:03:56.472903 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-08-29 15:03:56.472914 | orchestrator | Friday 29 August 2025 15:03:45 +0000 (0:00:00.459) 0:02:49.684 *********
2025-08-29 15:03:56.472931 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:56.472941 | orchestrator |
2025-08-29 15:03:56.472949 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-08-29 15:03:56.472958 | orchestrator | Friday 29 August 2025 15:03:47 +0000 (0:00:02.580) 0:02:52.264 *********
2025-08-29 15:03:56.472966 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:03:56.472975 | orchestrator |
2025-08-29 15:03:56.472984 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-08-29 15:03:56.472995 | orchestrator | Friday 29 August 2025 15:03:49 +0000 (0:00:02.696) 0:02:54.531 *********
2025-08-29 15:03:56.473005 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:56.473016 | orchestrator |
2025-08-29 15:03:56.473027 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-08-29 15:03:56.473037 | orchestrator | Friday 29 August 2025 15:03:52 +0000 (0:00:02.696) 0:02:57.227 *********
2025-08-29 15:03:56.473047 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:03:56.473058 | orchestrator |
2025-08-29 15:03:56.473072 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:03:56.473092 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:03:56.473110 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 15:03:56.473129 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-08-29 15:03:56.473146 | orchestrator |
2025-08-29 15:03:56.473164 | orchestrator |
2025-08-29 15:03:56.473180 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:03:56.473206 | orchestrator | Friday 29 August 2025 15:03:55 +0000 (0:00:02.385) 0:02:59.613 *********
2025-08-29 15:03:56.473224 | orchestrator | ===============================================================================
2025-08-29 15:03:56.473241 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.87s
2025-08-29 15:03:56.473258 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.91s
2025-08-29 15:03:56.473275 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.10s
2025-08-29 15:03:56.473293 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.74s
2025-08-29 15:03:56.473311 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.70s
2025-08-29 15:03:56.473329 | orchestrator | opensearch : Create new log retention policy
---------------------------- 2.70s 2025-08-29 15:03:56.473347 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.58s 2025-08-29 15:03:56.473365 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s 2025-08-29 15:03:56.473382 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.27s 2025-08-29 15:03:56.473401 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.15s 2025-08-29 15:03:56.473418 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.09s 2025-08-29 15:03:56.473437 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.87s 2025-08-29 15:03:56.473456 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.65s 2025-08-29 15:03:56.473474 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.40s 2025-08-29 15:03:56.473493 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.15s 2025-08-29 15:03:56.473509 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.72s 2025-08-29 15:03:56.473520 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-08-29 15:03:56.473530 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-08-29 15:03:56.473563 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2025-08-29 15:03:56.473582 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-08-29 15:03:56.473667 | orchestrator | 2025-08-29 15:03:56 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED 2025-08-29 15:03:56.473690 | orchestrator | 2025-08-29 15:03:56 | INFO  | Task 
218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED 2025-08-29 15:03:56.473710 | orchestrator | 2025-08-29 15:03:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:03:59.528920 | orchestrator | 2025-08-29 15:03:59 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED 2025-08-29 15:03:59.531011 | orchestrator | 2025-08-29 15:03:59 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED 2025-08-29 15:03:59.531086 | orchestrator | 2025-08-29 15:03:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:02.571869 | orchestrator | 2025-08-29 15:04:02 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED 2025-08-29 15:04:02.574298 | orchestrator | 2025-08-29 15:04:02 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED 2025-08-29 15:04:02.574874 | orchestrator | 2025-08-29 15:04:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:05.618199 | orchestrator | 2025-08-29 15:04:05 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED 2025-08-29 15:04:05.618524 | orchestrator | 2025-08-29 15:04:05 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED 2025-08-29 15:04:05.618557 | orchestrator | 2025-08-29 15:04:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:08.659698 | orchestrator | 2025-08-29 15:04:08 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state STARTED 2025-08-29 15:04:08.660191 | orchestrator | 2025-08-29 15:04:08 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED 2025-08-29 15:04:08.660215 | orchestrator | 2025-08-29 15:04:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:04:11.719650 | orchestrator | 2025-08-29 15:04:11.719718 | orchestrator | 2025-08-29 15:04:11.719729 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-08-29 15:04:11.719738 | orchestrator | 2025-08-29 15:04:11.719746 | 
orchestrator | TASK [Inform the user about the following task] ******************************** 2025-08-29 15:04:11.719754 | orchestrator | Friday 29 August 2025 15:00:55 +0000 (0:00:00.106) 0:00:00.106 ********* 2025-08-29 15:04:11.719763 | orchestrator | ok: [localhost] => { 2025-08-29 15:04:11.719772 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-08-29 15:04:11.719780 | orchestrator | } 2025-08-29 15:04:11.719788 | orchestrator | 2025-08-29 15:04:11.719796 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-08-29 15:04:11.719804 | orchestrator | Friday 29 August 2025 15:00:55 +0000 (0:00:00.050) 0:00:00.156 ********* 2025-08-29 15:04:11.719813 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-08-29 15:04:11.719822 | orchestrator | ...ignoring 2025-08-29 15:04:11.719830 | orchestrator | 2025-08-29 15:04:11.719839 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-08-29 15:04:11.719848 | orchestrator | Friday 29 August 2025 15:00:58 +0000 (0:00:02.913) 0:00:03.069 ********* 2025-08-29 15:04:11.719857 | orchestrator | skipping: [localhost] 2025-08-29 15:04:11.719867 | orchestrator | 2025-08-29 15:04:11.719876 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-08-29 15:04:11.720123 | orchestrator | Friday 29 August 2025 15:00:58 +0000 (0:00:00.068) 0:00:03.138 ********* 2025-08-29 15:04:11.720141 | orchestrator | ok: [localhost] 2025-08-29 15:04:11.720170 | orchestrator | 2025-08-29 15:04:11.720181 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:04:11.720203 | orchestrator | 2025-08-29 15:04:11.720212 | orchestrator | TASK [Group hosts based on Kolla 
action] *************************************** 2025-08-29 15:04:11.720221 | orchestrator | Friday 29 August 2025 15:00:58 +0000 (0:00:00.173) 0:00:03.311 ********* 2025-08-29 15:04:11.720230 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:04:11.720239 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:04:11.720248 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:04:11.720257 | orchestrator | 2025-08-29 15:04:11.720266 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:04:11.720275 | orchestrator | Friday 29 August 2025 15:00:59 +0000 (0:00:00.351) 0:00:03.663 ********* 2025-08-29 15:04:11.720283 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-08-29 15:04:11.720292 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-08-29 15:04:11.720301 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-08-29 15:04:11.720310 | orchestrator | 2025-08-29 15:04:11.720318 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-08-29 15:04:11.720327 | orchestrator | 2025-08-29 15:04:11.720336 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-08-29 15:04:11.720345 | orchestrator | Friday 29 August 2025 15:00:59 +0000 (0:00:00.588) 0:00:04.252 ********* 2025-08-29 15:04:11.720354 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-08-29 15:04:11.720363 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-08-29 15:04:11.720371 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-08-29 15:04:11.720379 | orchestrator | 2025-08-29 15:04:11.720386 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:04:11.720446 | orchestrator | Friday 29 August 2025 15:01:00 +0000 (0:00:00.403) 0:00:04.655 ********* 2025-08-29 15:04:11.720456 | orchestrator | included: 
/ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:11.720465 | orchestrator | 2025-08-29 15:04:11.720473 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-08-29 15:04:11.720482 | orchestrator | Friday 29 August 2025 15:01:00 +0000 (0:00:00.557) 0:00:05.212 ********* 2025-08-29 15:04:11.720516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:04:11.720538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:04:11.720552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:04:11.720562 | orchestrator | 2025-08-29 15:04:11.720592 | 
orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-08-29 15:04:11.720603 | orchestrator | Friday 29 August 2025 15:01:04 +0000 (0:00:03.498) 0:00:08.711 ********* 2025-08-29 15:04:11.720618 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:11.720628 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:11.720637 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:11.720646 | orchestrator | 2025-08-29 15:04:11.720654 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-08-29 15:04:11.720663 | orchestrator | Friday 29 August 2025 15:01:05 +0000 (0:00:00.974) 0:00:09.685 ********* 2025-08-29 15:04:11.720672 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:11.720681 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:11.720689 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:11.720698 | orchestrator | 2025-08-29 15:04:11.720707 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-08-29 15:04:11.720715 | orchestrator | Friday 29 August 2025 15:01:07 +0000 (0:00:01.736) 0:00:11.422 ********* 2025-08-29 15:04:11.720725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:04:11.720744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:04:11.720832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:04:11.720844 | orchestrator | 2025-08-29 15:04:11.720853 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-08-29 15:04:11.720862 | orchestrator | Friday 29 August 2025 15:01:10 +0000 (0:00:03.595) 0:00:15.017 ********* 2025-08-29 15:04:11.720870 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:11.720891 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:11.720899 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:11.720907 | orchestrator | 2025-08-29 15:04:11.720916 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-08-29 15:04:11.720924 | orchestrator | Friday 29 August 2025 15:01:12 +0000 (0:00:01.342) 0:00:16.359 ********* 2025-08-29 15:04:11.720933 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:04:11.720942 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:04:11.720951 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:04:11.720960 | orchestrator | 2025-08-29 15:04:11.720968 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-08-29 15:04:11.720976 | orchestrator | Friday 29 August 2025 15:01:16 +0000 
(0:00:04.328) 0:00:20.688 ********* 2025-08-29 15:04:11.720985 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:04:11.720994 | orchestrator | 2025-08-29 15:04:11.721003 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-08-29 15:04:11.721011 | orchestrator | Friday 29 August 2025 15:01:16 +0000 (0:00:00.541) 0:00:21.229 ********* 2025-08-29 15:04:11.721032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721052 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:11.721062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721071 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:11.721089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721104 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:11.721111 | orchestrator | 2025-08-29 15:04:11.721116 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-08-29 15:04:11.721121 | orchestrator | Friday 29 August 2025 15:01:19 +0000 (0:00:03.107) 0:00:24.336 ********* 2025-08-29 15:04:11.721126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721132 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:11.721150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721160 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:11.721165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721171 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:11.721177 | orchestrator | 2025-08-29 15:04:11.721186 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-08-29 15:04:11.721194 | orchestrator | Friday 29 August 2025 15:01:22 +0000 (0:00:02.996) 0:00:27.332 ********* 2025-08-29 15:04:11.721205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721216 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:04:11.721232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721238 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:04:11.721246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-08-29 15:04:11.721254 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:04:11.721259 | orchestrator | 2025-08-29 15:04:11.721264 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-08-29 15:04:11.721269 | orchestrator | Friday 29 August 2025 15:01:26 +0000 (0:00:03.057) 0:00:30.389 ********* 2025-08-29 15:04:11.721279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:04:11.721285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-08-29 15:04:11.721297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-08-29 15:04:11.721303 | orchestrator |
2025-08-29 15:04:11.721308 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-08-29 15:04:11.721313 | orchestrator | Friday 29 August 2025 15:01:29 +0000 (0:00:03.525) 0:00:33.915 *********
2025-08-29 15:04:11.721318 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.721323 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:04:11.721327 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:04:11.721332 | orchestrator |
2025-08-29 15:04:11.721337 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-08-29 15:04:11.721342 | orchestrator | Friday 29 August 2025 15:01:30 +0000 (0:00:01.096) 0:00:35.012 *********
2025-08-29 15:04:11.721347 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.721352 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:11.721357 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:11.721361 | orchestrator |
2025-08-29 15:04:11.721366 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-08-29 15:04:11.721371 | orchestrator | Friday 29 August 2025 15:01:31 +0000 (0:00:00.594) 0:00:35.607 *********
2025-08-29 15:04:11.721376 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.721381 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:11.721385 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:11.721394 | orchestrator |
2025-08-29 15:04:11.721399 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-08-29 15:04:11.721404 | orchestrator | Friday 29 August 2025 15:01:31 +0000 (0:00:00.335) 0:00:35.943 *********
2025-08-29 15:04:11.721409 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-08-29 15:04:11.721414 | orchestrator | ...ignoring
2025-08-29 15:04:11.721420 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-08-29 15:04:11.721424 | orchestrator | ...ignoring
2025-08-29 15:04:11.721429 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-08-29 15:04:11.721434 | orchestrator | ...ignoring
2025-08-29 15:04:11.721439 | orchestrator |
2025-08-29 15:04:11.721444 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-08-29 15:04:11.721449 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:10.860) 0:00:46.803 *********
2025-08-29 15:04:11.721454 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.721458 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:11.721463 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:11.721468 | orchestrator |
2025-08-29 15:04:11.721516 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-08-29 15:04:11.721528 | orchestrator | Friday 29 August 2025 15:01:42 +0000 (0:00:00.663) 0:00:47.247 *********
2025-08-29 15:04:11.721533 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:11.721538 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.721549 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.721561 | orchestrator |
2025-08-29 15:04:11.721572 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-08-29 15:04:11.721621 | orchestrator | Friday 29 August 2025 15:01:43 +0000 (0:00:00.441) 0:00:47.910 *********
2025-08-29 15:04:11.721631 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:11.721640 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.721648 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.721656 | orchestrator |
2025-08-29 15:04:11.721663 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-08-29 15:04:11.721668 | orchestrator | Friday 29 August 2025 15:01:44 +0000 (0:00:00.441) 0:00:48.352 *********
2025-08-29 15:04:11.721673 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:11.721678 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.721682 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.721687 | orchestrator |
2025-08-29 15:04:11.721692 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-08-29 15:04:11.721697 | orchestrator | Friday 29 August 2025 15:01:44 +0000 (0:00:00.491) 0:00:48.843 *********
2025-08-29 15:04:11.721702 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.721706 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:11.721711 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:11.721716 | orchestrator |
2025-08-29 15:04:11.721721 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-08-29 15:04:11.721726 | orchestrator | Friday 29 August 2025 15:01:44 +0000 (0:00:00.408) 0:00:49.252 *********
2025-08-29 15:04:11.721736 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:11.721741 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.721746 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.721751 | orchestrator |
2025-08-29 15:04:11.721756 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 15:04:11.721761 | orchestrator | Friday 29 August 2025 15:01:45 +0000 (0:00:00.713) 0:00:49.966 *********
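The ignored "Timeout when waiting for search string MariaDB" failures above come from a wait_for-style probe: connect to port 3306 and look for the string "MariaDB" in the server greeting, which is expected to time out before the cluster is bootstrapped. A minimal Python sketch of that kind of banner check (function name and the throwaway demo server are illustrative, not taken from kolla-ansible):

```python
import re
import socket
import threading
import time

def wait_for_banner(host, port, pattern, timeout=10.0):
    """Poll a TCP port until its greeting matches `pattern` (regex).
    Returns True on match, False when `timeout` expires without one."""
    deadline = time.monotonic() + timeout
    regex = re.compile(pattern)
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0) as sock:
                sock.settimeout(1.0)
                # MariaDB sends its handshake greeting first; read one chunk of it.
                data = sock.recv(1024).decode("latin-1", "replace")
                if regex.search(data):
                    return True
        except OSError:
            pass  # port not open yet, or read failed; retry until the deadline
        time.sleep(0.2)
    return False

# Demo against a local stand-in server that greets like a MariaDB socket.
def _serve(srv):
    while True:
        try:
            conn, _ = srv.accept()
        except OSError:
            return
        conn.sendall(b"5.5.5-10.6.22-MariaDB-log\n")
        conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=_serve, args=(srv,), daemon=True).start()
ok = wait_for_banner("127.0.0.1", srv.getsockname()[1], "MariaDB", timeout=5.0)
```

On a not-yet-started cluster every connection attempt fails or returns no matching banner, so the probe exhausts its timeout and reports failure, which the play deliberately ignores and uses only to divide hosts by liveness.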
2025-08-29 15:04:11.721768 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.721777 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.721783 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-08-29 15:04:11.721794 | orchestrator |
2025-08-29 15:04:11.721808 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-08-29 15:04:11.721813 | orchestrator | Friday 29 August 2025 15:01:46 +0000 (0:00:00.412) 0:00:50.379 *********
2025-08-29 15:04:11.721818 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.721823 | orchestrator |
2025-08-29 15:04:11.721828 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-08-29 15:04:11.721833 | orchestrator | Friday 29 August 2025 15:01:56 +0000 (0:00:10.552) 0:01:00.931 *********
2025-08-29 15:04:11.721837 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.721842 | orchestrator |
2025-08-29 15:04:11.721847 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-08-29 15:04:11.721852 | orchestrator | Friday 29 August 2025 15:01:56 +0000 (0:00:00.135) 0:01:01.066 *********
2025-08-29 15:04:11.721857 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:11.721862 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.721866 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.721871 | orchestrator |
2025-08-29 15:04:11.721876 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-08-29 15:04:11.721881 | orchestrator | Friday 29 August 2025 15:01:57 +0000 (0:00:01.025) 0:01:02.092 *********
2025-08-29 15:04:11.721886 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.721890 | orchestrator |
2025-08-29 15:04:11.721895 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-08-29 15:04:11.721900 | orchestrator | Friday 29 August 2025 15:02:05 +0000 (0:00:07.986) 0:01:10.079 *********
2025-08-29 15:04:11.721905 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.721910 | orchestrator |
2025-08-29 15:04:11.721915 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-08-29 15:04:11.721920 | orchestrator | Friday 29 August 2025 15:02:07 +0000 (0:00:01.606) 0:01:11.685 *********
2025-08-29 15:04:11.721924 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.721929 | orchestrator |
2025-08-29 15:04:11.721934 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-08-29 15:04:11.721939 | orchestrator | Friday 29 August 2025 15:02:09 +0000 (0:00:02.506) 0:01:14.192 *********
2025-08-29 15:04:11.721944 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.721949 | orchestrator |
2025-08-29 15:04:11.721953 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-08-29 15:04:11.721958 | orchestrator | Friday 29 August 2025 15:02:09 +0000 (0:00:00.127) 0:01:14.319 *********
2025-08-29 15:04:11.721963 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:11.721968 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.721973 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.721977 | orchestrator |
2025-08-29 15:04:11.721982 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-08-29 15:04:11.721987 | orchestrator | Friday 29 August 2025 15:02:10 +0000 (0:00:00.339) 0:01:14.659 *********
2025-08-29 15:04:11.721992 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:11.721997 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-08-29 15:04:11.722001 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:04:11.722006 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:04:11.722058 | orchestrator |
2025-08-29 15:04:11.722065 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-08-29 15:04:11.722075 | orchestrator | skipping: no hosts matched
2025-08-29 15:04:11.722080 | orchestrator |
2025-08-29 15:04:11.722085 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-08-29 15:04:11.722090 | orchestrator |
2025-08-29 15:04:11.722095 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 15:04:11.722100 | orchestrator | Friday 29 August 2025 15:02:10 +0000 (0:00:00.575) 0:01:15.234 *********
2025-08-29 15:04:11.722104 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:04:11.722129 | orchestrator |
2025-08-29 15:04:11.722134 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 15:04:11.722142 | orchestrator | Friday 29 August 2025 15:02:30 +0000 (0:00:19.357) 0:01:34.592 *********
2025-08-29 15:04:11.722147 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:11.722152 | orchestrator |
2025-08-29 15:04:11.722157 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 15:04:11.722162 | orchestrator | Friday 29 August 2025 15:02:51 +0000 (0:00:21.579) 0:01:56.172 *********
2025-08-29 15:04:11.722166 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:11.722172 | orchestrator |
2025-08-29 15:04:11.722176 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-08-29 15:04:11.722181 | orchestrator |
2025-08-29 15:04:11.722185 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 15:04:11.722190 | orchestrator | Friday 29 August 2025 15:02:54 +0000 (0:00:02.433) 0:01:58.605 *********
2025-08-29 15:04:11.722194 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:04:11.722199 | orchestrator |
2025-08-29 15:04:11.722204 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 15:04:11.722208 | orchestrator | Friday 29 August 2025 15:03:13 +0000 (0:00:19.720) 0:02:18.326 *********
2025-08-29 15:04:11.722213 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:11.722217 | orchestrator |
2025-08-29 15:04:11.722222 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 15:04:11.722226 | orchestrator | Friday 29 August 2025 15:03:35 +0000 (0:00:21.570) 0:02:39.896 *********
2025-08-29 15:04:11.722231 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:11.722235 | orchestrator |
2025-08-29 15:04:11.722240 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-08-29 15:04:11.722244 | orchestrator |
2025-08-29 15:04:11.722253 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-08-29 15:04:11.722258 | orchestrator | Friday 29 August 2025 15:03:37 +0000 (0:00:02.445) 0:02:42.341 *********
2025-08-29 15:04:11.722263 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.722267 | orchestrator |
2025-08-29 15:04:11.722272 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-08-29 15:04:11.722276 | orchestrator | Friday 29 August 2025 15:03:49 +0000 (0:00:11.626) 0:02:53.967 *********
2025-08-29 15:04:11.722281 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.722286 | orchestrator |
2025-08-29 15:04:11.722290 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-08-29 15:04:11.722295 | orchestrator | Friday 29 August 2025 15:03:55 +0000 (0:00:05.719) 0:02:59.687 *********
2025-08-29 15:04:11.722299 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.722304 | orchestrator |
2025-08-29 15:04:11.722309 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-08-29 15:04:11.722313 | orchestrator |
2025-08-29 15:04:11.722318 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-08-29 15:04:11.722322 | orchestrator | Friday 29 August 2025 15:03:58 +0000 (0:00:02.763) 0:03:02.451 *********
2025-08-29 15:04:11.722327 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:04:11.722331 | orchestrator |
2025-08-29 15:04:11.722336 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-08-29 15:04:11.722340 | orchestrator | Friday 29 August 2025 15:03:58 +0000 (0:00:00.566) 0:03:03.018 *********
2025-08-29 15:04:11.722345 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.722350 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.722354 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.722359 | orchestrator |
2025-08-29 15:04:11.722364 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-08-29 15:04:11.722368 | orchestrator | Friday 29 August 2025 15:04:00 +0000 (0:00:02.270) 0:03:05.289 *********
2025-08-29 15:04:11.722373 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.722377 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.722385 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.722390 | orchestrator |
2025-08-29 15:04:11.722394 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-08-29 15:04:11.722399 | orchestrator | Friday 29 August 2025 15:04:03 +0000 (0:00:02.183) 0:03:07.472 *********
2025-08-29 15:04:11.722404 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.722408 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.722413 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.722417 | orchestrator |
2025-08-29 15:04:11.722422 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-08-29 15:04:11.722426 | orchestrator | Friday 29 August 2025 15:04:05 +0000 (0:00:02.136) 0:03:09.608 *********
2025-08-29 15:04:11.722431 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.722436 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.722440 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:04:11.722445 | orchestrator |
2025-08-29 15:04:11.722449 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-08-29 15:04:11.722454 | orchestrator | Friday 29 August 2025 15:04:07 +0000 (0:00:02.093) 0:03:11.702 *********
2025-08-29 15:04:11.722459 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:04:11.722463 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:04:11.722468 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:04:11.722473 | orchestrator |
2025-08-29 15:04:11.722477 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-08-29 15:04:11.722482 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:02.935) 0:03:14.637 *********
2025-08-29 15:04:11.722493 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:04:11.722498 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:04:11.722502 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:04:11.722507 | orchestrator |
2025-08-29 15:04:11.722511 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:04:11.722517 | orchestrator | localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=1
2025-08-29 15:04:11.722522 | orchestrator | testbed-node-0 : ok=34 changed=16 unreachable=0 failed=0 skipped=11 rescued=0 ignored=1
2025-08-29 15:04:11.722530 | orchestrator | testbed-node-1 : ok=20 changed=7 unreachable=0 failed=0 skipped=18 rescued=0 ignored=1
2025-08-29 15:04:11.722534 | orchestrator | testbed-node-2 : ok=20 changed=7 unreachable=0 failed=0 skipped=18 rescued=0 ignored=1
2025-08-29 15:04:11.722542 | orchestrator |
2025-08-29 15:04:11.722550 | orchestrator |
2025-08-29 15:04:11.722558 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:04:11.722565 | orchestrator | Friday 29 August 2025 15:04:10 +0000 (0:00:00.458) 0:03:15.095 *********
2025-08-29 15:04:11.722573 | orchestrator | ===============================================================================
2025-08-29 15:04:11.722597 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 43.15s
2025-08-29 15:04:11.722606 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.08s
2025-08-29 15:04:11.722620 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.63s
2025-08-29 15:04:11.722628 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s
2025-08-29 15:04:11.722635 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.55s
2025-08-29 15:04:11.722643 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.99s
2025-08-29 15:04:11.722656 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.72s
2025-08-29 15:04:11.722664 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.88s
2025-08-29 15:04:11.722672 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.33s
2025-08-29 15:04:11.722684 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.60s
2025-08-29 15:04:11.722689 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.53s
2025-08-29 15:04:11.722693 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.50s
2025-08-29 15:04:11.722698 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.11s
2025-08-29 15:04:11.722703 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.06s
2025-08-29 15:04:11.722707 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.00s
2025-08-29 15:04:11.722712 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.94s
2025-08-29 15:04:11.722716 | orchestrator | Check MariaDB service --------------------------------------------------- 2.91s
2025-08-29 15:04:11.722721 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.76s
2025-08-29 15:04:11.722725 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.51s
2025-08-29 15:04:11.722730 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.27s
2025-08-29 15:04:11.722734 | orchestrator | 2025-08-29 15:04:11 | INFO  | Task 2ec287c2-cdb8-481f-80f8-8cd56f96c3e8 is in state SUCCESS
2025-08-29 15:04:11.722739 | orchestrator | 2025-08-29 15:04:11 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:11.722744 | orchestrator | 2025-08-29 15:04:11 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:14.773650 | orchestrator | 2025-08-29 15:04:14 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:14.775557 | orchestrator | 2025-08-29 15:04:14 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:14.778547 | orchestrator | 2025-08-29 15:04:14 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:14.778997 |
orchestrator | 2025-08-29 15:04:14 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:17.811311 | orchestrator | 2025-08-29 15:04:17 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:17.812941 | orchestrator | 2025-08-29 15:04:17 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:17.814208 | orchestrator | 2025-08-29 15:04:17 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:17.816003 | orchestrator | 2025-08-29 15:04:17 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:20.849463 | orchestrator | 2025-08-29 15:04:20 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:20.851050 | orchestrator | 2025-08-29 15:04:20 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:20.853955 | orchestrator | 2025-08-29 15:04:20 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:20.853992 | orchestrator | 2025-08-29 15:04:20 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:23.903772 | orchestrator | 2025-08-29 15:04:23 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:23.904383 | orchestrator | 2025-08-29 15:04:23 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:23.905879 | orchestrator | 2025-08-29 15:04:23 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:23.905929 | orchestrator | 2025-08-29 15:04:23 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:26.945221 | orchestrator | 2025-08-29 15:04:26 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:26.946373 | orchestrator | 2025-08-29 15:04:26 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:26.947642 | orchestrator | 2025-08-29 15:04:26 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:26.947692 | orchestrator | 2025-08-29 15:04:26 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:29.987031 | orchestrator | 2025-08-29 15:04:29 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:29.987271 | orchestrator | 2025-08-29 15:04:29 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:29.988254 | orchestrator | 2025-08-29 15:04:29 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:29.988284 | orchestrator | 2025-08-29 15:04:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:33.028810 | orchestrator | 2025-08-29 15:04:33 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:33.028903 | orchestrator | 2025-08-29 15:04:33 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:33.028912 | orchestrator | 2025-08-29 15:04:33 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:33.028919 | orchestrator | 2025-08-29 15:04:33 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:36.060919 | orchestrator | 2025-08-29 15:04:36 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:36.061865 | orchestrator | 2025-08-29 15:04:36 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:36.063713 | orchestrator | 2025-08-29 15:04:36 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:36.063773 | orchestrator | 2025-08-29 15:04:36 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:39.091507 | orchestrator | 2025-08-29 15:04:39 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:39.091711 | orchestrator | 2025-08-29 15:04:39 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:39.093843 | orchestrator | 2025-08-29 15:04:39 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:39.093884 | orchestrator | 2025-08-29 15:04:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:42.127028 | orchestrator | 2025-08-29 15:04:42 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:42.127409 | orchestrator | 2025-08-29 15:04:42 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:42.128167 | orchestrator | 2025-08-29 15:04:42 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:42.128296 | orchestrator | 2025-08-29 15:04:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:45.153314 | orchestrator | 2025-08-29 15:04:45 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:45.154846 | orchestrator | 2025-08-29 15:04:45 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:45.157071 | orchestrator | 2025-08-29 15:04:45 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:45.157142 | orchestrator | 2025-08-29 15:04:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:48.193717 | orchestrator | 2025-08-29 15:04:48 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:48.195149 | orchestrator | 2025-08-29 15:04:48 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:48.196659 | orchestrator | 2025-08-29 15:04:48 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:48.196700 | orchestrator | 2025-08-29 15:04:48 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:51.243909 | orchestrator | 2025-08-29 15:04:51 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:51.246571 | orchestrator | 2025-08-29 15:04:51 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:51.247438 | orchestrator | 2025-08-29 15:04:51 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:51.247498 | orchestrator | 2025-08-29 15:04:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:54.301276 | orchestrator | 2025-08-29 15:04:54 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:54.303287 | orchestrator | 2025-08-29 15:04:54 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:54.305477 | orchestrator | 2025-08-29 15:04:54 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:54.305784 | orchestrator | 2025-08-29 15:04:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:04:57.354583 | orchestrator | 2025-08-29 15:04:57 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:04:57.357489 | orchestrator | 2025-08-29 15:04:57 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:04:57.359401 | orchestrator | 2025-08-29 15:04:57 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:04:57.359487 | orchestrator | 2025-08-29 15:04:57 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:00.412884 | orchestrator | 2025-08-29 15:05:00 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:05:00.414801 | orchestrator | 2025-08-29 15:05:00 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:05:00.416179 | orchestrator | 2025-08-29 15:05:00 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:05:00.416215 | orchestrator | 2025-08-29 15:05:00 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:03.455713 | orchestrator | 2025-08-29 15:05:03 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:05:03.456283 | orchestrator | 2025-08-29 15:05:03 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:05:03.457249 | orchestrator | 2025-08-29 15:05:03 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:05:03.457268 | orchestrator | 2025-08-29 15:05:03 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:06.510508 | orchestrator | 2025-08-29 15:05:06 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:05:06.512391 | orchestrator | 2025-08-29 15:05:06 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:05:06.514198 | orchestrator | 2025-08-29 15:05:06 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:05:06.514232 | orchestrator | 2025-08-29 15:05:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:09.559200 | orchestrator | 2025-08-29 15:05:09 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:05:09.560843 | orchestrator | 2025-08-29 15:05:09 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:05:09.562300 | orchestrator | 2025-08-29 15:05:09 | INFO  | Task 218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state STARTED
2025-08-29 15:05:09.562711 | orchestrator | 2025-08-29 15:05:09 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:05:12.596755 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED
2025-08-29 15:05:12.598342 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED
2025-08-29 15:05:12.600092 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED
2025-08-29 15:05:12.603420 | orchestrator | 2025-08-29 15:05:12 | INFO  | Task
218cee0b-e303-4cbd-b2ed-3429b878ccf8 is in state SUCCESS
2025-08-29 15:05:12.604892 | orchestrator | 
2025-08-29 15:05:12.604924 | orchestrator | 
2025-08-29 15:05:12.604928 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-08-29 15:05:12.604933 | orchestrator | 
2025-08-29 15:05:12.604938 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-08-29 15:05:12.604942 | orchestrator | Friday 29 August 2025 15:03:03 +0000 (0:00:00.613) 0:00:00.613 *********
2025-08-29 15:05:12.604947 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:05:12.604952 | orchestrator | 
2025-08-29 15:05:12.604956 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-08-29 15:05:12.604960 | orchestrator | Friday 29 August 2025 15:03:03 +0000 (0:00:00.663) 0:00:01.276 *********
2025-08-29 15:05:12.604964 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.604970 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.604973 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.604977 | orchestrator | 
2025-08-29 15:05:12.604981 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-08-29 15:05:12.604996 | orchestrator | Friday 29 August 2025 15:03:04 +0000 (0:00:00.682) 0:00:01.959 *********
2025-08-29 15:05:12.605000 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605004 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.605008 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.605012 | orchestrator | 
2025-08-29 15:05:12.605015 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-08-29 15:05:12.605019 | orchestrator | Friday 29 August 2025 15:03:04 +0000 (0:00:00.293) 0:00:02.253 *********
2025-08-29 15:05:12.605023 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605027 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.605030 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.605034 | orchestrator | 
2025-08-29 15:05:12.605038 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-08-29 15:05:12.605043 | orchestrator | Friday 29 August 2025 15:03:05 +0000 (0:00:00.875) 0:00:03.128 *********
2025-08-29 15:05:12.605047 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605050 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.605054 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.605058 | orchestrator | 
2025-08-29 15:05:12.605061 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-08-29 15:05:12.605065 | orchestrator | Friday 29 August 2025 15:03:06 +0000 (0:00:00.312) 0:00:03.441 *********
2025-08-29 15:05:12.605069 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605073 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.605085 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.605089 | orchestrator | 
2025-08-29 15:05:12.605093 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-08-29 15:05:12.605176 | orchestrator | Friday 29 August 2025 15:03:06 +0000 (0:00:00.310) 0:00:03.751 *********
2025-08-29 15:05:12.605181 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605185 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.605189 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.605193 | orchestrator | 
2025-08-29 15:05:12.605247 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-08-29 15:05:12.605265 | orchestrator | Friday 29 August 2025 15:03:06 +0000 (0:00:00.316) 0:00:04.067 *********
2025-08-29 15:05:12.605269 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605274 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.605278 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.605281 | orchestrator | 
2025-08-29 15:05:12.605285 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-08-29 15:05:12.605289 | orchestrator | Friday 29 August 2025 15:03:07 +0000 (0:00:00.501) 0:00:04.569 *********
2025-08-29 15:05:12.605293 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605296 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.605412 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.605418 | orchestrator | 
2025-08-29 15:05:12.605421 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-08-29 15:05:12.605425 | orchestrator | Friday 29 August 2025 15:03:07 +0000 (0:00:00.303) 0:00:04.872 *********
2025-08-29 15:05:12.605429 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 15:05:12.605434 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:05:12.605438 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:05:12.605441 | orchestrator | 
2025-08-29 15:05:12.605445 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-08-29 15:05:12.605449 | orchestrator | Friday 29 August 2025 15:03:08 +0000 (0:00:00.674) 0:00:05.547 *********
2025-08-29 15:05:12.605453 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605457 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.605460 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.605464 | orchestrator | 
2025-08-29 15:05:12.605468 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-08-29 15:05:12.605472 | orchestrator | Friday 29 August 2025 15:03:08 +0000 (0:00:00.471) 0:00:06.018 *********
2025-08-29 15:05:12.605476 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-08-29 15:05:12.605479 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-08-29 15:05:12.605483 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-08-29 15:05:12.605487 | orchestrator | 
2025-08-29 15:05:12.605491 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-08-29 15:05:12.605495 | orchestrator | Friday 29 August 2025 15:03:10 +0000 (0:00:02.092) 0:00:08.110 *********
2025-08-29 15:05:12.605499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-08-29 15:05:12.605503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-08-29 15:05:12.605507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-08-29 15:05:12.605510 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605514 | orchestrator | 
2025-08-29 15:05:12.605557 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-08-29 15:05:12.605569 | orchestrator | Friday 29 August 2025 15:03:11 +0000 (0:00:00.425) 0:00:08.536 *********
2025-08-29 15:05:12.605575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605601 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605605 | orchestrator | 
2025-08-29 15:05:12.605609 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-08-29 15:05:12.605612 | orchestrator | Friday 29 August 2025 15:03:11 +0000 (0:00:00.847) 0:00:09.384 *********
2025-08-29 15:05:12.605618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605629 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605633 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605637 | orchestrator | 
2025-08-29 15:05:12.605641 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-08-29 15:05:12.605645 | orchestrator | Friday 29 August 2025 15:03:12 +0000 (0:00:00.178) 0:00:09.562 *********
2025-08-29 15:05:12.605651 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ff7ff686c676', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-08-29 15:03:09.262165', 'end': '2025-08-29 15:03:09.296758', 'delta': '0:00:00.034593', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ff7ff686c676'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605658 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '33d468d1fef8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-08-29 15:03:10.008542', 'end': '2025-08-29 15:03:10.049563', 'delta': '0:00:00.041021', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['33d468d1fef8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605668 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '63b6dc1fb7f2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-08-29 15:03:10.547274', 'end': '2025-08-29 15:03:10.585346', 'delta': '0:00:00.038072', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['63b6dc1fb7f2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-08-29 15:05:12.605677 | orchestrator | 
2025-08-29 15:05:12.605680 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-08-29 15:05:12.605687 | orchestrator | Friday 29 August 2025 15:03:12 +0000 (0:00:00.389) 0:00:09.952 *********
2025-08-29 15:05:12.605691 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605695 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:05:12.605699 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:05:12.605702 | orchestrator | 
2025-08-29 15:05:12.605706 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-08-29 15:05:12.605710 | orchestrator | Friday 29 August 2025 15:03:13 +0000 (0:00:00.444) 0:00:10.396 *********
2025-08-29 15:05:12.605714 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-08-29 15:05:12.605718 | orchestrator | 
2025-08-29 15:05:12.605721 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-08-29 15:05:12.605725 | orchestrator | Friday 29 August 2025 15:03:14 +0000 (0:00:01.754) 0:00:12.150 *********
2025-08-29 15:05:12.605729 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605733 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.605737 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.605741 | orchestrator | 
2025-08-29 15:05:12.605744 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-08-29 15:05:12.605748 | orchestrator | Friday 29 August 2025 15:03:15 +0000 (0:00:00.340) 0:00:12.491 *********
2025-08-29 15:05:12.605752 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605756 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.605760 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.605763 | orchestrator | 
2025-08-29 15:05:12.605768 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 15:05:12.605773 | orchestrator | Friday 29 August 2025 15:03:15 +0000 (0:00:00.427) 0:00:12.919 *********
2025-08-29 15:05:12.605779 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605785 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.605790 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.605796 | orchestrator | 
2025-08-29 15:05:12.605802 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-08-29 15:05:12.605808 | orchestrator | Friday 29 August 2025 15:03:16 +0000 (0:00:00.540) 0:00:13.460 *********
2025-08-29 15:05:12.605813 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:05:12.605819 | orchestrator | 
2025-08-29 15:05:12.605825 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-08-29 15:05:12.605831 | orchestrator | Friday 29 August 2025 15:03:16 +0000 (0:00:00.163) 0:00:13.624 *********
2025-08-29 15:05:12.605836 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605842 | orchestrator | 
2025-08-29 15:05:12.605848 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-08-29 15:05:12.605855 | orchestrator | Friday 29 August 2025 15:03:16 +0000 (0:00:00.260) 0:00:13.884 *********
2025-08-29 15:05:12.605860 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605867 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.605873 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.605878 | orchestrator | 
2025-08-29 15:05:12.605885 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-08-29 15:05:12.605890 | orchestrator | Friday 29 August 2025 15:03:16 +0000 (0:00:00.295) 0:00:14.180 *********
2025-08-29 15:05:12.605896 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.605901 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.606084 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.606095 | orchestrator | 
2025-08-29 15:05:12.606099 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-08-29 15:05:12.606110 | orchestrator | Friday 29 August 2025 15:03:17 +0000 (0:00:00.345) 0:00:14.526 *********
2025-08-29 15:05:12.606114 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.606118 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.606122 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.606126 | orchestrator | 
2025-08-29 15:05:12.606130 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-08-29 15:05:12.606134 | orchestrator | Friday 29 August 2025 15:03:17 +0000 (0:00:00.549) 0:00:15.075 *********
2025-08-29 15:05:12.606138 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.606142 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.606145 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.606149 | orchestrator | 
2025-08-29 15:05:12.606153 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-08-29 15:05:12.606157 | orchestrator | Friday 29 August 2025 15:03:18 +0000 (0:00:00.344) 0:00:15.419 *********
2025-08-29 15:05:12.606161 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.606164 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.606168 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.606172 | orchestrator | 
2025-08-29 15:05:12.606176 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-08-29 15:05:12.606180 | orchestrator | Friday 29 August 2025 15:03:18 +0000 (0:00:00.331) 0:00:15.751 *********
2025-08-29 15:05:12.606184 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.606187 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.606191 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.606195 | orchestrator | 
2025-08-29 15:05:12.606199 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-08-29 15:05:12.606217 | orchestrator | Friday 29 August 2025 15:03:18 +0000 (0:00:00.365) 0:00:16.117 *********
2025-08-29 15:05:12.606221 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:05:12.606225 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:05:12.606229 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:05:12.606232 | orchestrator | 
2025-08-29 15:05:12.606236 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-08-29 15:05:12.606240 | orchestrator | Friday 29 August 2025 15:03:19 +0000 (0:00:00.535) 0:00:16.653 *********
2025-08-29 15:05:12.606250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4c2f47a1--6693--5b64--9c97--de0e0041f7f6-osd--block--4c2f47a1--6693--5b64--9c97--de0e0041f7f6', 'dm-uuid-LVM-Bp5IZIwJszEoPKs6GxQSx36pvmgQf6q0IyrE6ewHb9DMU0L0xp7HNmv46iu3XxJl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--218f7b56--b785--5eaf--b35f--b0ddc87960c6-osd--block--218f7b56--b785--5eaf--b35f--b0ddc87960c6', 'dm-uuid-LVM-TCuPDh3Kkt6qr7lxpx96YD8cOAfUV0veRcqeVF90lZRHkrvUQO1xQSZkfh4ATm9z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-08-29 15:05:12.606308 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part1', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part14', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part15', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part16', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4c2f47a1--6693--5b64--9c97--de0e0041f7f6-osd--block--4c2f47a1--6693--5b64--9c97--de0e0041f7f6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g6gG1M-IL2V-0Amf-c0c4-cNnW-1fYD-P7ziHe', 'scsi-0QEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7', 'scsi-SQEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--218f7b56--b785--5eaf--b35f--b0ddc87960c6-osd--block--218f7b56--b785--5eaf--b35f--b0ddc87960c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-apmPRc-0tLh-gd7f-MAbU-v5aI-vXhU-6ffmio', 'scsi-0QEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61', 'scsi-SQEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cd5b7d9a--1dd4--5184--a319--6c247fab2039-osd--block--cd5b7d9a--1dd4--5184--a319--6c247fab2039', 'dm-uuid-LVM-jvaVJ10Fcpsrf1MTBY8qTdZ2Gmf4tvfjrxCUn0JKSDlbhS0WygbP9Vo61jKxYujP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012', 'scsi-SQEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95dc25c6--61fb--51c1--a723--34c7e57ec220-osd--block--95dc25c6--61fb--51c1--a723--34c7e57ec220', 'dm-uuid-LVM-N95ON7yjd24XBIBBInOWMAWyxHtTxspjTYoG7FDOaB2vvWjw1Ow5naJsFLKGQSQe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-08-29 15:05:12.606408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606424 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.606439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part1', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part14', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part15', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part16', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606447 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cd5b7d9a--1dd4--5184--a319--6c247fab2039-osd--block--cd5b7d9a--1dd4--5184--a319--6c247fab2039'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XDJnt6-q2Eo-YK5E-585i-i4Kv-BrAS-eFQVNK', 'scsi-0QEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24', 'scsi-SQEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--95dc25c6--61fb--51c1--a723--34c7e57ec220-osd--block--95dc25c6--61fb--51c1--a723--34c7e57ec220'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gVkfSh-TfT9-3kSw-mC7z-BdmD-ou8j-HrzLNf', 'scsi-0QEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9', 'scsi-SQEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f', 'scsi-SQEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606467 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.606471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea955146--254c--5a5a--83ec--c4f4ca16d6a1-osd--block--ea955146--254c--5a5a--83ec--c4f4ca16d6a1', 'dm-uuid-LVM-GA0Ozd01uf4NtDu82eUfKUTgH86R8g26EFOKy90nOoZcEQ0B214t60sPYQkvVMlo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--aeb09036--0b6a--534a--a94a--678fcf7bc5df-osd--block--aeb09036--0b6a--534a--a94a--678fcf7bc5df', 'dm-uuid-LVM-QWmHQARGOg6TrjUoKwNCJiKNVBi3jngnNrA7HneS0AvBK79Eij2Pgug45a7oXxag'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:12.606481 | orchestrator | 2025-08-29 15:05:12.606486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29
15:05:12.606505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-08-29 15:05:12.606557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ea955146--254c--5a5a--83ec--c4f4ca16d6a1-osd--block--ea955146--254c--5a5a--83ec--c4f4ca16d6a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XM2r6Z-mfGm-hHfb-jDir-devG-vA6W-Zce22J', 'scsi-0QEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b', 'scsi-SQEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--aeb09036--0b6a--534a--a94a--678fcf7bc5df-osd--block--aeb09036--0b6a--534a--a94a--678fcf7bc5df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5LeC74-eFEO-dWsq-6JVp-2A0l-KQB8-M37UwZ', 'scsi-0QEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a', 'scsi-SQEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8', 'scsi-SQEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-08-29 15:05:12.606586 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.606590 | orchestrator | 2025-08-29 15:05:12.606594 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-08-29 15:05:12.606598 | orchestrator | Friday 29 August 2025 15:03:19 +0000 (0:00:00.635) 0:00:17.288 ********* 2025-08-29 15:05:12.606605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4c2f47a1--6693--5b64--9c97--de0e0041f7f6-osd--block--4c2f47a1--6693--5b64--9c97--de0e0041f7f6', 'dm-uuid-LVM-Bp5IZIwJszEoPKs6GxQSx36pvmgQf6q0IyrE6ewHb9DMU0L0xp7HNmv46iu3XxJl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--218f7b56--b785--5eaf--b35f--b0ddc87960c6-osd--block--218f7b56--b785--5eaf--b35f--b0ddc87960c6', 'dm-uuid-LVM-TCuPDh3Kkt6qr7lxpx96YD8cOAfUV0veRcqeVF90lZRHkrvUQO1xQSZkfh4ATm9z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606635 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606645 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cd5b7d9a--1dd4--5184--a319--6c247fab2039-osd--block--cd5b7d9a--1dd4--5184--a319--6c247fab2039', 'dm-uuid-LVM-jvaVJ10Fcpsrf1MTBY8qTdZ2Gmf4tvfjrxCUn0JKSDlbhS0WygbP9Vo61jKxYujP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606653 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606657 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--95dc25c6--61fb--51c1--a723--34c7e57ec220-osd--block--95dc25c6--61fb--51c1--a723--34c7e57ec220', 'dm-uuid-LVM-N95ON7yjd24XBIBBInOWMAWyxHtTxspjTYoG7FDOaB2vvWjw1Ow5naJsFLKGQSQe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:05:12.606670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606686 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part1', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part14', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part15', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part16', 'scsi-SQEMU_QEMU_HARDDISK_12058b5d-7e0f-4769-b570-e8724a20121a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:05:12.606690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4c2f47a1--6693--5b64--9c97--de0e0041f7f6-osd--block--4c2f47a1--6693--5b64--9c97--de0e0041f7f6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g6gG1M-IL2V-0Amf-c0c4-cNnW-1fYD-P7ziHe', 'scsi-0QEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7', 'scsi-SQEMU_QEMU_HARDDISK_8e840163-cd15-4bab-ac0d-7731db5a26c7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--218f7b56--b785--5eaf--b35f--b0ddc87960c6-osd--block--218f7b56--b785--5eaf--b35f--b0ddc87960c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-apmPRc-0tLh-gd7f-MAbU-v5aI-vXhU-6ffmio', 'scsi-0QEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61', 'scsi-SQEMU_QEMU_HARDDISK_b50f501b-7dcc-49bb-af34-bcea70be6a61'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606722 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012', 'scsi-SQEMU_QEMU_HARDDISK_34b7b0aa-9c3f-4af7-b9a4-6261675e7012'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606746 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606750 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.606755 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606759 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606772 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part1', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part14', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part15', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part16', 'scsi-SQEMU_QEMU_HARDDISK_a59956ee-14fc-4c64-8315-f5435014482a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606781 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--cd5b7d9a--1dd4--5184--a319--6c247fab2039-osd--block--cd5b7d9a--1dd4--5184--a319--6c247fab2039'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XDJnt6-q2Eo-YK5E-585i-i4Kv-BrAS-eFQVNK', 'scsi-0QEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24', 'scsi-SQEMU_QEMU_HARDDISK_fa9350c4-64bc-4afb-b502-f801a6f70a24'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--95dc25c6--61fb--51c1--a723--34c7e57ec220-osd--block--95dc25c6--61fb--51c1--a723--34c7e57ec220'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gVkfSh-TfT9-3kSw-mC7z-BdmD-ou8j-HrzLNf', 'scsi-0QEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9', 'scsi-SQEMU_QEMU_HARDDISK_ce9b0281-40bf-44ad-b4c6-e5b614e2c1c9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606791 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f', 'scsi-SQEMU_QEMU_HARDDISK_d4d11aa1-e648-4125-bb7f-b16cf1114c9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606799 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea955146--254c--5a5a--83ec--c4f4ca16d6a1-osd--block--ea955146--254c--5a5a--83ec--c4f4ca16d6a1', 'dm-uuid-LVM-GA0Ozd01uf4NtDu82eUfKUTgH86R8g26EFOKy90nOoZcEQ0B214t60sPYQkvVMlo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606814 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.606819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aeb09036--0b6a--534a--a94a--678fcf7bc5df-osd--block--aeb09036--0b6a--534a--a94a--678fcf7bc5df', 'dm-uuid-LVM-QWmHQARGOg6TrjUoKwNCJiKNVBi3jngnNrA7HneS0AvBK79Eij2Pgug45a7oXxag'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606823 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606842 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606854 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606858 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606863 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606874 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_eca37a69-3c0f-4357-9670-f9669d9e69b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-08-29 15:05:12.606882 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ea955146--254c--5a5a--83ec--c4f4ca16d6a1-osd--block--ea955146--254c--5a5a--83ec--c4f4ca16d6a1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XM2r6Z-mfGm-hHfb-jDir-devG-vA6W-Zce22J', 'scsi-0QEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b', 'scsi-SQEMU_QEMU_HARDDISK_00b08f76-6c14-40db-8d96-1843b494176b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606887 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--aeb09036--0b6a--534a--a94a--678fcf7bc5df-osd--block--aeb09036--0b6a--534a--a94a--678fcf7bc5df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5LeC74-eFEO-dWsq-6JVp-2A0l-KQB8-M37UwZ', 'scsi-0QEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a', 'scsi-SQEMU_QEMU_HARDDISK_54964cbc-4c5d-4365-aa24-d13bcc6e495a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606891 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8', 'scsi-SQEMU_QEMU_HARDDISK_8d4c2d77-38a8-4e70-8dcf-48e237e577e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-08-29-14-07-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-08-29 15:05:12.606906 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.606910 | orchestrator | 2025-08-29 15:05:12.606914 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-08-29 15:05:12.606919 | orchestrator | Friday 29 August 2025 15:03:20 +0000 (0:00:00.648) 0:00:17.937 ********* 2025-08-29 15:05:12.606923 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:05:12.606928 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:05:12.606932 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:05:12.606936 | orchestrator | 2025-08-29 15:05:12.606943 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-08-29 15:05:12.606947 | orchestrator | Friday 29 August 2025 15:03:21 +0000 (0:00:00.694) 0:00:18.631 ********* 2025-08-29 15:05:12.606952 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:05:12.606956 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:05:12.606960 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:05:12.606964 | orchestrator | 2025-08-29 15:05:12.606969 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:05:12.606973 | orchestrator | Friday 29 August 2025 15:03:21 +0000 (0:00:00.524) 0:00:19.156 ********* 2025-08-29 15:05:12.606978 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:05:12.606982 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:05:12.606986 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:05:12.606990 | orchestrator | 2025-08-29 15:05:12.606994 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:05:12.606999 | orchestrator | Friday 29 August 2025 15:03:22 +0000 (0:00:00.659) 0:00:19.816 
********* 2025-08-29 15:05:12.607003 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607007 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.607011 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.607016 | orchestrator | 2025-08-29 15:05:12.607020 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-08-29 15:05:12.607024 | orchestrator | Friday 29 August 2025 15:03:22 +0000 (0:00:00.323) 0:00:20.139 ********* 2025-08-29 15:05:12.607028 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607033 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.607037 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.607041 | orchestrator | 2025-08-29 15:05:12.607045 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-08-29 15:05:12.607050 | orchestrator | Friday 29 August 2025 15:03:23 +0000 (0:00:00.436) 0:00:20.575 ********* 2025-08-29 15:05:12.607054 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607058 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.607063 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.607067 | orchestrator | 2025-08-29 15:05:12.607071 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-08-29 15:05:12.607076 | orchestrator | Friday 29 August 2025 15:03:23 +0000 (0:00:00.515) 0:00:21.091 ********* 2025-08-29 15:05:12.607085 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-08-29 15:05:12.607090 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-08-29 15:05:12.607094 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-08-29 15:05:12.607099 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-08-29 15:05:12.607103 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-08-29 15:05:12.607108 | orchestrator 
| ok: [testbed-node-5] => (item=testbed-node-1) 2025-08-29 15:05:12.607113 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-08-29 15:05:12.607117 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-08-29 15:05:12.607122 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-08-29 15:05:12.607126 | orchestrator | 2025-08-29 15:05:12.607130 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-08-29 15:05:12.607135 | orchestrator | Friday 29 August 2025 15:03:24 +0000 (0:00:00.861) 0:00:21.953 ********* 2025-08-29 15:05:12.607139 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-08-29 15:05:12.607144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-08-29 15:05:12.607148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-08-29 15:05:12.607153 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607157 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-08-29 15:05:12.607161 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-08-29 15:05:12.607165 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-08-29 15:05:12.607169 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.607172 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-08-29 15:05:12.607176 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-08-29 15:05:12.607180 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-08-29 15:05:12.607184 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.607188 | orchestrator | 2025-08-29 15:05:12.607191 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-08-29 15:05:12.607195 | orchestrator | Friday 29 August 2025 15:03:24 +0000 (0:00:00.382) 0:00:22.335 ********* 2025-08-29 
15:05:12.607199 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:05:12.607203 | orchestrator | 2025-08-29 15:05:12.607207 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-08-29 15:05:12.607211 | orchestrator | Friday 29 August 2025 15:03:25 +0000 (0:00:00.740) 0:00:23.076 ********* 2025-08-29 15:05:12.607217 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607221 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.607225 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.607228 | orchestrator | 2025-08-29 15:05:12.607232 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-08-29 15:05:12.607236 | orchestrator | Friday 29 August 2025 15:03:26 +0000 (0:00:00.336) 0:00:23.412 ********* 2025-08-29 15:05:12.607240 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607244 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.607247 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.607251 | orchestrator | 2025-08-29 15:05:12.607255 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-08-29 15:05:12.607259 | orchestrator | Friday 29 August 2025 15:03:26 +0000 (0:00:00.341) 0:00:23.754 ********* 2025-08-29 15:05:12.607263 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607266 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.607270 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:05:12.607274 | orchestrator | 2025-08-29 15:05:12.607278 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-08-29 15:05:12.607297 | orchestrator | Friday 29 August 2025 15:03:26 +0000 (0:00:00.319) 0:00:24.073 ********* 2025-08-29 
15:05:12.607301 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:05:12.607305 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:05:12.607309 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:05:12.607313 | orchestrator | 2025-08-29 15:05:12.607316 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-08-29 15:05:12.607320 | orchestrator | Friday 29 August 2025 15:03:27 +0000 (0:00:00.603) 0:00:24.677 ********* 2025-08-29 15:05:12.607324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:05:12.607328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:05:12.607332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:05:12.607335 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607339 | orchestrator | 2025-08-29 15:05:12.607343 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-08-29 15:05:12.607347 | orchestrator | Friday 29 August 2025 15:03:27 +0000 (0:00:00.366) 0:00:25.044 ********* 2025-08-29 15:05:12.607351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:05:12.607355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:05:12.607359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:05:12.607363 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607367 | orchestrator | 2025-08-29 15:05:12.607370 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-08-29 15:05:12.607374 | orchestrator | Friday 29 August 2025 15:03:28 +0000 (0:00:00.387) 0:00:25.432 ********* 2025-08-29 15:05:12.607378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-08-29 15:05:12.607382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-08-29 15:05:12.607385 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-08-29 15:05:12.607389 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607393 | orchestrator | 2025-08-29 15:05:12.607397 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-08-29 15:05:12.607401 | orchestrator | Friday 29 August 2025 15:03:28 +0000 (0:00:00.365) 0:00:25.797 ********* 2025-08-29 15:05:12.607405 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:05:12.607408 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:05:12.607412 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:05:12.607416 | orchestrator | 2025-08-29 15:05:12.607420 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-08-29 15:05:12.607424 | orchestrator | Friday 29 August 2025 15:03:28 +0000 (0:00:00.308) 0:00:26.106 ********* 2025-08-29 15:05:12.607428 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-08-29 15:05:12.607432 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-08-29 15:05:12.607435 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-08-29 15:05:12.607439 | orchestrator | 2025-08-29 15:05:12.607443 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-08-29 15:05:12.607447 | orchestrator | Friday 29 August 2025 15:03:29 +0000 (0:00:00.465) 0:00:26.572 ********* 2025-08-29 15:05:12.607451 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:05:12.607455 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:05:12.607459 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:05:12.607465 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:05:12.607471 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-08-29 15:05:12.607477 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:05:12.607484 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:05:12.607492 | orchestrator | 2025-08-29 15:05:12.607502 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-08-29 15:05:12.607515 | orchestrator | Friday 29 August 2025 15:03:30 +0000 (0:00:00.833) 0:00:27.406 ********* 2025-08-29 15:05:12.607536 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-08-29 15:05:12.607543 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-08-29 15:05:12.607549 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-08-29 15:05:12.607555 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-08-29 15:05:12.607561 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-08-29 15:05:12.607567 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-08-29 15:05:12.607576 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-08-29 15:05:12.607581 | orchestrator | 2025-08-29 15:05:12.607587 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-08-29 15:05:12.607593 | orchestrator | Friday 29 August 2025 15:03:31 +0000 (0:00:01.614) 0:00:29.021 ********* 2025-08-29 15:05:12.607599 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:05:12.607605 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:05:12.607611 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-08-29 15:05:12.607616 | orchestrator | 2025-08-29 15:05:12.607622 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-08-29 15:05:12.607628 | orchestrator | Friday 29 August 2025 15:03:31 +0000 (0:00:00.330) 0:00:29.351 ********* 2025-08-29 15:05:12.607638 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:05:12.607647 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:05:12.607653 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:05:12.607660 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:05:12.607666 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-08-29 15:05:12.607672 | orchestrator | 2025-08-29 15:05:12.607679 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-08-29 15:05:12.607684 | orchestrator | Friday 29 August 2025 15:04:16 +0000 (0:00:44.431) 0:01:13.783 ********* 2025-08-29 15:05:12.607690 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607696 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607708 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607714 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607728 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607732 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-08-29 15:05:12.607736 | orchestrator | 2025-08-29 15:05:12.607740 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-08-29 15:05:12.607744 | orchestrator | Friday 29 August 2025 15:04:40 +0000 (0:00:23.677) 0:01:37.460 ********* 2025-08-29 15:05:12.607748 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607751 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607755 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607759 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607763 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607766 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607770 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-08-29 15:05:12.607774 | orchestrator | 2025-08-29 15:05:12.607777 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-08-29 15:05:12.607781 | orchestrator | Friday 29 August 2025 15:04:52 +0000 (0:00:12.616) 0:01:50.077 ********* 2025-08-29 15:05:12.607785 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607789 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:05:12.607792 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:05:12.607796 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607800 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:05:12.607807 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:05:12.607811 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607815 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:05:12.607818 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:05:12.607822 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607827 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:05:12.607833 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:05:12.607839 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607848 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-08-29 15:05:12.607858 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:05:12.607867 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-08-29 15:05:12.607872 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-08-29 15:05:12.607878 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-08-29 15:05:12.607885 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-08-29 15:05:12.607891 | orchestrator | 2025-08-29 15:05:12.607896 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:05:12.607902 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-08-29 15:05:12.607910 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-08-29 15:05:12.607922 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-08-29 15:05:12.607928 | orchestrator | 2025-08-29 15:05:12.607934 | orchestrator | 2025-08-29 15:05:12.607940 | orchestrator | 2025-08-29 15:05:12.607946 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:05:12.607951 | orchestrator | Friday 29 August 2025 15:05:10 +0000 (0:00:17.669) 0:02:07.746 ********* 2025-08-29 15:05:12.607956 | orchestrator | =============================================================================== 2025-08-29 15:05:12.607962 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.43s 2025-08-29 15:05:12.607968 | orchestrator | generate keys ---------------------------------------------------------- 23.68s 2025-08-29 15:05:12.607974 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.67s 
2025-08-29 15:05:12.607979 | orchestrator | get keys from monitors ------------------------------------------------- 12.62s 2025-08-29 15:05:12.607984 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.09s 2025-08-29 15:05:12.607990 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.75s 2025-08-29 15:05:12.607996 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.61s 2025-08-29 15:05:12.608003 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.88s 2025-08-29 15:05:12.608008 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.86s 2025-08-29 15:05:12.608015 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.85s 2025-08-29 15:05:12.608021 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.83s 2025-08-29 15:05:12.608027 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.74s 2025-08-29 15:05:12.608033 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s 2025-08-29 15:05:12.608039 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s 2025-08-29 15:05:12.608045 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s 2025-08-29 15:05:12.608051 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.66s 2025-08-29 15:05:12.608056 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s 2025-08-29 15:05:12.608060 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.65s 2025-08-29 15:05:12.608064 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.64s 2025-08-29 
15:05:12.608068 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.60s 2025-08-29 15:05:15.652242 | orchestrator | 2025-08-29 15:05:15 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:15.653154 | orchestrator | 2025-08-29 15:05:15 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:15.656366 | orchestrator | 2025-08-29 15:05:15 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:15.656441 | orchestrator | 2025-08-29 15:05:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:18.700202 | orchestrator | 2025-08-29 15:05:18 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:18.708549 | orchestrator | 2025-08-29 15:05:18 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:18.710494 | orchestrator | 2025-08-29 15:05:18 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:18.710592 | orchestrator | 2025-08-29 15:05:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:21.751788 | orchestrator | 2025-08-29 15:05:21 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:21.752843 | orchestrator | 2025-08-29 15:05:21 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:21.753740 | orchestrator | 2025-08-29 15:05:21 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:21.753996 | orchestrator | 2025-08-29 15:05:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:24.803127 | orchestrator | 2025-08-29 15:05:24 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:24.805071 | orchestrator | 2025-08-29 15:05:24 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:24.806500 | orchestrator | 2025-08-29 
15:05:24 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:24.806576 | orchestrator | 2025-08-29 15:05:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:27.855491 | orchestrator | 2025-08-29 15:05:27 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:27.856872 | orchestrator | 2025-08-29 15:05:27 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:27.859987 | orchestrator | 2025-08-29 15:05:27 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:27.860034 | orchestrator | 2025-08-29 15:05:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:30.895658 | orchestrator | 2025-08-29 15:05:30 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:30.897081 | orchestrator | 2025-08-29 15:05:30 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:30.897876 | orchestrator | 2025-08-29 15:05:30 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:30.897919 | orchestrator | 2025-08-29 15:05:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:33.955027 | orchestrator | 2025-08-29 15:05:33 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:33.957905 | orchestrator | 2025-08-29 15:05:33 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:33.959293 | orchestrator | 2025-08-29 15:05:33 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:33.959357 | orchestrator | 2025-08-29 15:05:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:37.009101 | orchestrator | 2025-08-29 15:05:37 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:37.010916 | orchestrator | 2025-08-29 15:05:37 | INFO  | Task 
aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:37.012343 | orchestrator | 2025-08-29 15:05:37 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:37.012381 | orchestrator | 2025-08-29 15:05:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:40.063887 | orchestrator | 2025-08-29 15:05:40 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:40.064803 | orchestrator | 2025-08-29 15:05:40 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state STARTED 2025-08-29 15:05:40.066243 | orchestrator | 2025-08-29 15:05:40 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:40.066285 | orchestrator | 2025-08-29 15:05:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:43.123573 | orchestrator | 2025-08-29 15:05:43 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:05:43.127877 | orchestrator | 2025-08-29 15:05:43 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:43.129240 | orchestrator | 2025-08-29 15:05:43 | INFO  | Task aa7633e9-1bc9-43a9-8346-f24e508acab3 is in state SUCCESS 2025-08-29 15:05:43.131713 | orchestrator | 2025-08-29 15:05:43 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:43.131803 | orchestrator | 2025-08-29 15:05:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:46.187410 | orchestrator | 2025-08-29 15:05:46 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:05:46.189770 | orchestrator | 2025-08-29 15:05:46 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:46.191609 | orchestrator | 2025-08-29 15:05:46 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:46.191682 | orchestrator | 2025-08-29 15:05:46 | INFO  | Wait 1 second(s) until the next 
check 2025-08-29 15:05:49.239396 | orchestrator | 2025-08-29 15:05:49 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:05:49.239870 | orchestrator | 2025-08-29 15:05:49 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:49.240689 | orchestrator | 2025-08-29 15:05:49 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:49.240722 | orchestrator | 2025-08-29 15:05:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:52.285768 | orchestrator | 2025-08-29 15:05:52 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:05:52.286723 | orchestrator | 2025-08-29 15:05:52 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:52.288536 | orchestrator | 2025-08-29 15:05:52 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:52.288589 | orchestrator | 2025-08-29 15:05:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:55.322551 | orchestrator | 2025-08-29 15:05:55 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:05:55.322669 | orchestrator | 2025-08-29 15:05:55 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:55.323572 | orchestrator | 2025-08-29 15:05:55 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:05:55.323633 | orchestrator | 2025-08-29 15:05:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:05:58.374404 | orchestrator | 2025-08-29 15:05:58 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:05:58.375900 | orchestrator | 2025-08-29 15:05:58 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:05:58.377847 | orchestrator | 2025-08-29 15:05:58 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 
15:05:58.377945 | orchestrator | 2025-08-29 15:05:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:01.419315 | orchestrator | 2025-08-29 15:06:01 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:01.420368 | orchestrator | 2025-08-29 15:06:01 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state STARTED 2025-08-29 15:06:01.422641 | orchestrator | 2025-08-29 15:06:01 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:01.422706 | orchestrator | 2025-08-29 15:06:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:04.471193 | orchestrator | 2025-08-29 15:06:04 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:04.472743 | orchestrator | 2025-08-29 15:06:04 | INFO  | Task ac3846fa-919e-42ba-b144-656c916ba223 is in state SUCCESS 2025-08-29 15:06:04.474555 | orchestrator | 2025-08-29 15:06:04.474616 | orchestrator | 2025-08-29 15:06:04.474625 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-08-29 15:06:04.474633 | orchestrator | 2025-08-29 15:06:04.474640 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-08-29 15:06:04.474647 | orchestrator | Friday 29 August 2025 15:05:14 +0000 (0:00:00.159) 0:00:00.159 ********* 2025-08-29 15:06:04.474653 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-08-29 15:06:04.474662 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.474668 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.474674 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:06:04.474680 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.474761 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-08-29 15:06:04.474950 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-08-29 15:06:04.474959 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-08-29 15:06:04.474965 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-08-29 15:06:04.474972 | orchestrator | 2025-08-29 15:06:04.474978 | orchestrator | TASK [Create share directory] ************************************************** 2025-08-29 15:06:04.474984 | orchestrator | Friday 29 August 2025 15:05:19 +0000 (0:00:04.315) 0:00:04.474 ********* 2025-08-29 15:06:04.474992 | orchestrator | changed: [testbed-manager -> localhost] 2025-08-29 15:06:04.474998 | orchestrator | 2025-08-29 15:06:04.475006 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-08-29 15:06:04.475014 | orchestrator | Friday 29 August 2025 15:05:20 +0000 (0:00:01.042) 0:00:05.516 ********* 2025-08-29 15:06:04.475022 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-08-29 15:06:04.475030 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.475037 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.475060 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:06:04.475068 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.475074 | orchestrator | changed: [testbed-manager -> localhost] => 
(item=ceph.client.nova.keyring) 2025-08-29 15:06:04.475081 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-08-29 15:06:04.475087 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-08-29 15:06:04.475094 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-08-29 15:06:04.475100 | orchestrator | 2025-08-29 15:06:04.475106 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-08-29 15:06:04.475112 | orchestrator | Friday 29 August 2025 15:05:33 +0000 (0:00:13.566) 0:00:19.083 ********* 2025-08-29 15:06:04.475120 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-08-29 15:06:04.475127 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.475135 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.475139 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-08-29 15:06:04.475158 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-08-29 15:06:04.475162 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-08-29 15:06:04.475166 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-08-29 15:06:04.475170 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-08-29 15:06:04.475173 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-08-29 15:06:04.475177 | orchestrator | 2025-08-29 15:06:04.475181 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:06:04.475185 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:06:04.475191 | 
orchestrator | 2025-08-29 15:06:04.475194 | orchestrator | 2025-08-29 15:06:04.475198 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:06:04.475202 | orchestrator | Friday 29 August 2025 15:05:40 +0000 (0:00:06.948) 0:00:26.032 ********* 2025-08-29 15:06:04.475206 | orchestrator | =============================================================================== 2025-08-29 15:06:04.475209 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.57s 2025-08-29 15:06:04.475213 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.95s 2025-08-29 15:06:04.475217 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.32s 2025-08-29 15:06:04.475221 | orchestrator | Create share directory -------------------------------------------------- 1.04s 2025-08-29 15:06:04.475225 | orchestrator | 2025-08-29 15:06:04.475228 | orchestrator | 2025-08-29 15:06:04.475232 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:06:04.475236 | orchestrator | 2025-08-29 15:06:04.475251 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:06:04.475255 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:00.318) 0:00:00.318 ********* 2025-08-29 15:06:04.475259 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:04.475265 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.475270 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.475276 | orchestrator | 2025-08-29 15:06:04.475283 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:06:04.475289 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:00.309) 0:00:00.628 ********* 2025-08-29 15:06:04.475294 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 
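The play recapped above distributes Ceph keyrings in three steps: fetch each keyring from the first node, write the set into a share directory, then write it into the configuration directory. The write half of that flow can be sketched in Python as follows; this is an illustrative sketch only, and the directory names and the `distribute_keys` helper are not part of OSISM (the duplicate `ceph.client.cinder.keyring` entries in the log suggest the playbook loop lists that keyring once per consuming service, which the sketch collapses to one entry).

```python
from pathlib import Path

# Keyring file names seen in the play's loop items.
KEYRINGS = [
    "ceph.client.admin.keyring",
    "ceph.client.cinder.keyring",
    "ceph.client.cinder-backup.keyring",
    "ceph.client.nova.keyring",
    "ceph.client.glance.keyring",
    "ceph.client.gnocchi.keyring",
    "ceph.client.manila.keyring",
]

def distribute_keys(fetched: dict, share_dir: Path, config_dir: Path) -> int:
    """Write each fetched keyring into both target directories.

    `fetched` maps keyring file names to contents, as pulled from the
    first Ceph node. Returns the number of files written.
    """
    written = 0
    for target in (share_dir, config_dir):
        # Corresponds to the "Create share directory" task.
        target.mkdir(parents=True, exist_ok=True)
        for name in KEYRINGS:
            if name not in fetched:
                continue
            (target / name).write_bytes(fetched[name])
            written += 1
    return written
```

Writing the same file twice is cheap and idempotent, which matches the log showing a mix of `changed` and `ok` results on repeated items.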
2025-08-29 15:06:04.475302 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-08-29 15:06:04.475307 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-08-29 15:06:04.475313 | orchestrator | 2025-08-29 15:06:04.475319 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-08-29 15:06:04.475325 | orchestrator | 2025-08-29 15:06:04.475331 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:06:04.475338 | orchestrator | Friday 29 August 2025 15:04:16 +0000 (0:00:00.463) 0:00:01.092 ********* 2025-08-29 15:06:04.475343 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:06:04.475352 | orchestrator | 2025-08-29 15:06:04.475358 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-08-29 15:06:04.475364 | orchestrator | Friday 29 August 2025 15:04:16 +0000 (0:00:00.518) 0:00:01.610 ********* 2025-08-29 15:06:04.475382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:06:04.475413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:06:04.475424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:06:04.475438 | orchestrator | 2025-08-29 15:06:04.475444 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-08-29 15:06:04.475450 | orchestrator | Friday 29 August 2025 15:04:17 +0000 (0:00:01.126) 0:00:02.737 ********* 2025-08-29 15:06:04.475457 | orchestrator | ok: 
[testbed-node-0] 2025-08-29 15:06:04.475463 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.475488 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.475496 | orchestrator | 2025-08-29 15:06:04.475502 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:06:04.475509 | orchestrator | Friday 29 August 2025 15:04:18 +0000 (0:00:00.364) 0:00:03.101 ********* 2025-08-29 15:06:04.475517 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:06:04.475525 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:06:04.475536 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:06:04.475542 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:06:04.475548 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:06:04.475554 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:06:04.475560 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:06:04.475565 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:06:04.475571 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:06:04.475578 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:06:04.475584 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:06:04.475591 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:06:04.475597 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 
15:06:04.475612 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:06:04.475618 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:06:04.475625 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:06:04.475631 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-08-29 15:06:04.475637 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-08-29 15:06:04.475643 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-08-29 15:06:04.475649 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-08-29 15:06:04.475655 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-08-29 15:06:04.475662 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-08-29 15:06:04.475669 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-08-29 15:06:04.475678 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-08-29 15:06:04.475686 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-08-29 15:06:04.475695 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-08-29 15:06:04.475702 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-08-29 15:06:04.475709 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml 
for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-08-29 15:06:04.475716 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-08-29 15:06:04.475722 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-08-29 15:06:04.475729 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-08-29 15:06:04.475735 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-08-29 15:06:04.475742 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-08-29 15:06:04.475750 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-08-29 15:06:04.475756 | orchestrator | 2025-08-29 15:06:04.475763 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:06:04.475771 | orchestrator | Friday 29 August 2025 15:04:18 +0000 (0:00:00.695) 0:00:03.797 ********* 2025-08-29 15:06:04.475776 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:04.475782 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.475790 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.475796 | orchestrator | 2025-08-29 15:06:04.475802 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:06:04.475809 | orchestrator | Friday 29 August 2025 15:04:19 
+0000 (0:00:00.265) 0:00:04.063 ********* 2025-08-29 15:06:04.475815 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.475822 | orchestrator | 2025-08-29 15:06:04.475828 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:06:04.475847 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:00.111) 0:00:04.174 ********* 2025-08-29 15:06:04.475854 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.475861 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.475867 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.475874 | orchestrator | 2025-08-29 15:06:04.475880 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:06:04.475886 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:00.368) 0:00:04.543 ********* 2025-08-29 15:06:04.475892 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:04.475898 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.475905 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.475910 | orchestrator | 2025-08-29 15:06:04.475917 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:06:04.475923 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:00.299) 0:00:04.842 ********* 2025-08-29 15:06:04.475929 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.475935 | orchestrator | 2025-08-29 15:06:04.475941 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:06:04.475948 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:00.141) 0:00:04.984 ********* 2025-08-29 15:06:04.475954 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.475959 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.475965 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.475971 | orchestrator | 
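The long run of `skipping:` and `included: policy_item.yml` entries above is a per-service loop: each dashboard plugin carries an `enabled` flag, and only enabled services get `policy_item.yml` included per host. Notably, the flags are a mix of booleans (`False`, `True`) and strings (`'no'`, `'yes'`). A minimal sketch of that filtering, assuming a normalization like the one the log output implies (the helper names are hypothetical, not Kolla Ansible's):

```python
def is_enabled(flag) -> bool:
    # The log shows both booleans and 'yes'/'no' strings as enabled values;
    # treat truthy booleans and yes-like strings as enabled.
    if isinstance(flag, bool):
        return flag
    return str(flag).strip().lower() in ("yes", "true")

# A subset of the service items seen in the log.
SERVICES = [
    {"name": "cloudkitty", "enabled": False},
    {"name": "heat", "enabled": "no"},
    {"name": "ceilometer", "enabled": "yes"},
    {"name": "cinder", "enabled": "yes"},
    {"name": "designate", "enabled": True},
]

def services_to_include(services):
    # Mirrors the include/skip decision: disabled services are skipped,
    # enabled ones get the policy tasks included for every host.
    return [s["name"] for s in services if is_enabled(s["enabled"])]
```

This also explains why the same "Update policy file name" / "Check if policies shall be overwritten" task pair repeats once per enabled service in the output.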
2025-08-29 15:06:04.475977 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:06:04.475986 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.310) 0:00:05.295 ********* 2025-08-29 15:06:04.475993 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:04.476000 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.476007 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.476013 | orchestrator | 2025-08-29 15:06:04.476019 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:06:04.476025 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.369) 0:00:05.664 ********* 2025-08-29 15:06:04.476032 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476037 | orchestrator | 2025-08-29 15:06:04.476044 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:06:04.476050 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.144) 0:00:05.809 ********* 2025-08-29 15:06:04.476056 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476063 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.476068 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.476075 | orchestrator | 2025-08-29 15:06:04.476081 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:06:04.476088 | orchestrator | Friday 29 August 2025 15:04:21 +0000 (0:00:00.501) 0:00:06.311 ********* 2025-08-29 15:06:04.476093 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:04.476106 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.476113 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.476119 | orchestrator | 2025-08-29 15:06:04.476126 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:06:04.476133 | orchestrator | Friday 29 
August 2025 15:04:21 +0000 (0:00:00.320) 0:00:06.631 ********* 2025-08-29 15:06:04.476137 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476141 | orchestrator | 2025-08-29 15:06:04.476144 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:06:04.476148 | orchestrator | Friday 29 August 2025 15:04:21 +0000 (0:00:00.143) 0:00:06.774 ********* 2025-08-29 15:06:04.476152 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476156 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.476159 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.476163 | orchestrator | 2025-08-29 15:06:04.476167 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:06:04.476177 | orchestrator | Friday 29 August 2025 15:04:22 +0000 (0:00:00.317) 0:00:07.092 ********* 2025-08-29 15:06:04.476181 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:04.476184 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.476188 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.476192 | orchestrator | 2025-08-29 15:06:04.476210 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:06:04.476216 | orchestrator | Friday 29 August 2025 15:04:22 +0000 (0:00:00.315) 0:00:07.408 ********* 2025-08-29 15:06:04.476222 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476228 | orchestrator | 2025-08-29 15:06:04.476234 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:06:04.476240 | orchestrator | Friday 29 August 2025 15:04:22 +0000 (0:00:00.332) 0:00:07.740 ********* 2025-08-29 15:06:04.476245 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476251 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.476258 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.476263 | 
orchestrator | 2025-08-29 15:06:04.476269 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:06:04.476275 | orchestrator | Friday 29 August 2025 15:04:23 +0000 (0:00:00.288) 0:00:08.029 ********* 2025-08-29 15:06:04.476282 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:04.476289 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.476296 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.476302 | orchestrator | 2025-08-29 15:06:04.476308 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:06:04.476315 | orchestrator | Friday 29 August 2025 15:04:23 +0000 (0:00:00.316) 0:00:08.346 ********* 2025-08-29 15:06:04.476321 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476328 | orchestrator | 2025-08-29 15:06:04.476334 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-08-29 15:06:04.476340 | orchestrator | Friday 29 August 2025 15:04:23 +0000 (0:00:00.139) 0:00:08.485 ********* 2025-08-29 15:06:04.476346 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476352 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.476359 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.476365 | orchestrator | 2025-08-29 15:06:04.476371 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-08-29 15:06:04.476377 | orchestrator | Friday 29 August 2025 15:04:23 +0000 (0:00:00.310) 0:00:08.796 ********* 2025-08-29 15:06:04.476384 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:06:04.476390 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:06:04.476397 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:06:04.476404 | orchestrator | 2025-08-29 15:06:04.476418 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-08-29 15:06:04.476425 | orchestrator 
| Friday 29 August 2025 15:04:24 +0000 (0:00:00.595) 0:00:09.391 *********
2025-08-29 15:06:04.476430 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476434 | orchestrator |
2025-08-29 15:06:04.476438 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:06:04.476442 | orchestrator | Friday 29 August 2025 15:04:24 +0000 (0:00:00.131) 0:00:09.523 *********
2025-08-29 15:06:04.476446 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476450 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:06:04.476454 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:06:04.476458 | orchestrator |
2025-08-29 15:06:04.476462 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 15:06:04.476490 | orchestrator | Friday 29 August 2025 15:04:24 +0000 (0:00:00.321) 0:00:09.844 *********
2025-08-29 15:06:04.476495 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:06:04.476499 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:06:04.476503 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:06:04.476507 | orchestrator |
2025-08-29 15:06:04.476511 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:06:04.476515 | orchestrator | Friday 29 August 2025 15:04:25 +0000 (0:00:00.341) 0:00:10.185 *********
2025-08-29 15:06:04.476526 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476531 | orchestrator |
2025-08-29 15:06:04.476538 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:06:04.476544 | orchestrator | Friday 29 August 2025 15:04:25 +0000 (0:00:00.147) 0:00:10.332 *********
2025-08-29 15:06:04.476550 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476560 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:06:04.476568 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:06:04.476574 | orchestrator |
2025-08-29 15:06:04.476580 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 15:06:04.476587 | orchestrator | Friday 29 August 2025 15:04:25 +0000 (0:00:00.287) 0:00:10.620 *********
2025-08-29 15:06:04.476604 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:06:04.476610 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:06:04.476616 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:06:04.476622 | orchestrator |
2025-08-29 15:06:04.476628 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:06:04.476634 | orchestrator | Friday 29 August 2025 15:04:26 +0000 (0:00:00.596) 0:00:11.216 *********
2025-08-29 15:06:04.476640 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476646 | orchestrator |
2025-08-29 15:06:04.476652 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:06:04.476673 | orchestrator | Friday 29 August 2025 15:04:26 +0000 (0:00:00.146) 0:00:11.363 *********
2025-08-29 15:06:04.476680 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476686 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:06:04.476692 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:06:04.476698 | orchestrator |
2025-08-29 15:06:04.476704 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-08-29 15:06:04.476710 | orchestrator | Friday 29 August 2025 15:04:26 +0000 (0:00:00.311) 0:00:11.674 *********
2025-08-29 15:06:04.476716 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:06:04.476722 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:06:04.476728 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:06:04.476735 | orchestrator |
2025-08-29 15:06:04.476750 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-08-29 15:06:04.476757 | orchestrator | Friday 29 August 2025 15:04:26 +0000 (0:00:00.347) 0:00:12.022 *********
2025-08-29 15:06:04.476763 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476769 | orchestrator |
2025-08-29 15:06:04.476776 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-08-29 15:06:04.476783 | orchestrator | Friday 29 August 2025 15:04:27 +0000 (0:00:00.125) 0:00:12.148 *********
2025-08-29 15:06:04.476789 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476795 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:06:04.476801 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:06:04.476807 | orchestrator |
2025-08-29 15:06:04.476813 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-08-29 15:06:04.476819 | orchestrator | Friday 29 August 2025 15:04:27 +0000 (0:00:00.513) 0:00:12.661 *********
2025-08-29 15:06:04.476825 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:06:04.476832 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:06:04.476838 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:06:04.476844 | orchestrator |
2025-08-29 15:06:04.476851 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-08-29 15:06:04.476858 | orchestrator | Friday 29 August 2025 15:04:29 +0000 (0:00:01.649) 0:00:14.310 *********
2025-08-29 15:06:04.476865 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 15:06:04.476871 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 15:06:04.476877 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-08-29 15:06:04.476883 | orchestrator |
2025-08-29 15:06:04.476889 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-08-29 15:06:04.476903 | orchestrator | Friday 29 August 2025 15:04:31 +0000 (0:00:01.801) 0:00:16.112 *********
2025-08-29 15:06:04.476908 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 15:06:04.476914 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 15:06:04.476917 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-08-29 15:06:04.476921 | orchestrator |
2025-08-29 15:06:04.476925 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-08-29 15:06:04.476929 | orchestrator | Friday 29 August 2025 15:04:33 +0000 (0:00:02.227) 0:00:18.339 *********
2025-08-29 15:06:04.476938 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 15:06:04.476942 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 15:06:04.476946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-08-29 15:06:04.476950 | orchestrator |
2025-08-29 15:06:04.476954 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-08-29 15:06:04.476958 | orchestrator | Friday 29 August 2025 15:04:35 +0000 (0:00:01.940) 0:00:20.280 *********
2025-08-29 15:06:04.476962 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.476966 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:06:04.476969 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:06:04.476973 | orchestrator |
2025-08-29 15:06:04.476977 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-08-29 15:06:04.476981 | orchestrator | Friday
29 August 2025 15:04:35 +0000 (0:00:00.276) 0:00:20.557 ********* 2025-08-29 15:06:04.476985 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.476989 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.476993 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.476997 | orchestrator | 2025-08-29 15:06:04.477001 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-08-29 15:06:04.477005 | orchestrator | Friday 29 August 2025 15:04:35 +0000 (0:00:00.275) 0:00:20.832 ********* 2025-08-29 15:06:04.477009 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:06:04.477013 | orchestrator | 2025-08-29 15:06:04.477017 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-08-29 15:06:04.477020 | orchestrator | Friday 29 August 2025 15:04:36 +0000 (0:00:00.540) 0:00:21.373 ********* 2025-08-29 15:06:04.477030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:06:04.477046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:06:04.477054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:06:04.477065 | orchestrator | 2025-08-29 15:06:04.477073 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-08-29 15:06:04.477079 | orchestrator | Friday 29 August 2025 15:04:37 +0000 (0:00:01.495) 0:00:22.869 ********* 2025-08-29 15:06:04.477096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:06:04.477103 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.477110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:06:04.477128 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.477139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:06:04.477146 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.477152 | orchestrator | 2025-08-29 15:06:04.477164 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-08-29 15:06:04.477170 | orchestrator | Friday 29 August 2025 15:04:38 +0000 (0:00:00.596) 0:00:23.465 ********* 2025-08-29 15:06:04.477183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:06:04.477190 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:06:04.477201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:06:04.477213 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:06:04.477227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-08-29 15:06:04.477232 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:06:04.477236 | orchestrator | 2025-08-29 15:06:04.477240 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-08-29 15:06:04.477243 | 
orchestrator | Friday 29 August 2025 15:04:39 +0000 (0:00:00.723) 0:00:24.188 ********* 2025-08-29 15:06:04.477254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:06:04.477275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-08-29 15:06:04.477286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-08-29 15:06:04.477301 | orchestrator |
2025-08-29 15:06:04.477308 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-08-29 15:06:04.477314 | orchestrator | Friday 29 August 2025 15:04:40 +0000 (0:00:01.394) 0:00:25.583 *********
2025-08-29 15:06:04.477321 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:06:04.477326 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:06:04.477332 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:06:04.477352 | orchestrator |
2025-08-29 15:06:04.477357 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-08-29 15:06:04.477361 | orchestrator | Friday 29 August 2025 15:04:40 +0000 (0:00:00.272) 0:00:25.855 *********
2025-08-29 15:06:04.477365 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:06:04.477369 | orchestrator |
2025-08-29 15:06:04.477373 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-08-29 15:06:04.477376 | orchestrator | Friday 29 August 2025 15:04:41 +0000 (0:00:00.477) 0:00:26.333 *********
2025-08-29 15:06:04.477380 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:06:04.477384 | orchestrator |
2025-08-29 15:06:04.477392 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-08-29 15:06:04.477396 | orchestrator | Friday 29 August 2025 15:04:43 +0000 (0:00:02.179) 0:00:28.512 *********
2025-08-29 15:06:04.477400 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:06:04.477404 | orchestrator |
2025-08-29 15:06:04.477408 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-08-29 15:06:04.477412 | orchestrator | Friday 29 August 2025 15:04:45 +0000 (0:00:02.504) 0:00:31.016 *********
2025-08-29 15:06:04.477415 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:06:04.477419 | orchestrator |
2025-08-29 15:06:04.477423 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:06:04.477427 | orchestrator | Friday 29 August 2025 15:05:01 +0000 (0:00:15.866) 0:00:46.883 *********
2025-08-29 15:06:04.477431 | orchestrator |
2025-08-29 15:06:04.477435 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:06:04.477439 | orchestrator | Friday 29 August 2025 15:05:01 +0000 (0:00:00.075) 0:00:46.959 *********
2025-08-29 15:06:04.477443 | orchestrator |
2025-08-29 15:06:04.477446 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-08-29 15:06:04.477450 | orchestrator | Friday 29 August 2025 15:05:02 +0000 (0:00:00.069) 0:00:47.029 *********
2025-08-29 15:06:04.477454 | orchestrator |
2025-08-29 15:06:04.477458 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-08-29 15:06:04.477508 | orchestrator | Friday 29 August 2025 15:05:02 +0000 (0:00:00.076) 0:00:47.105 *********
2025-08-29 15:06:04.477514 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:06:04.477518 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:06:04.477522 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:06:04.477525 | orchestrator |
2025-08-29 15:06:04.477529 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:06:04.477534 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-08-29 15:06:04.477540 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-08-29 15:06:04.477544 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-08-29 15:06:04.477548 | orchestrator |
2025-08-29 15:06:04.477551 | orchestrator |
2025-08-29 15:06:04.477555 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:06:04.477563 | orchestrator | Friday 29 August 2025 15:06:01 +0000 (0:00:59.129) 0:01:46.235 *********
2025-08-29 15:06:04.477567 | orchestrator | ===============================================================================
2025-08-29 15:06:04.477571 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.13s
2025-08-29 15:06:04.477575 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.87s
2025-08-29 15:06:04.477579 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.50s
2025-08-29 15:06:04.477583 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.23s
2025-08-29 15:06:04.477586 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.18s
2025-08-29 15:06:04.477590 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.94s
2025-08-29 15:06:04.477594 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.80s
2025-08-29 15:06:04.477598 | orchestrator | horizon : Copying over config.json
files for services ------------------- 1.65s 2025-08-29 15:06:04.477602 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.50s 2025-08-29 15:06:04.477606 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.40s 2025-08-29 15:06:04.477609 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.13s 2025-08-29 15:06:04.477613 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.72s 2025-08-29 15:06:04.477617 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-08-29 15:06:04.477621 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.60s 2025-08-29 15:06:04.477625 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2025-08-29 15:06:04.477629 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2025-08-29 15:06:04.477632 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2025-08-29 15:06:04.477637 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2025-08-29 15:06:04.477640 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-08-29 15:06:04.477645 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2025-08-29 15:06:04.477648 | orchestrator | 2025-08-29 15:06:04 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:04.477652 | orchestrator | 2025-08-29 15:06:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:07.517394 | orchestrator | 2025-08-29 15:06:07 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:07.520086 | orchestrator | 2025-08-29 15:06:07 | INFO  | 
Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:07.520163 | orchestrator | 2025-08-29 15:06:07 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:10.563633 | orchestrator | 2025-08-29 15:06:10 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:10.565101 | orchestrator | 2025-08-29 15:06:10 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:10.566065 | orchestrator | 2025-08-29 15:06:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:13.613687 | orchestrator | 2025-08-29 15:06:13 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:13.617382 | orchestrator | 2025-08-29 15:06:13 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:13.618234 | orchestrator | 2025-08-29 15:06:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:16.661039 | orchestrator | 2025-08-29 15:06:16 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:16.661127 | orchestrator | 2025-08-29 15:06:16 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:16.661137 | orchestrator | 2025-08-29 15:06:16 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:19.703816 | orchestrator | 2025-08-29 15:06:19 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:19.705499 | orchestrator | 2025-08-29 15:06:19 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:19.705534 | orchestrator | 2025-08-29 15:06:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:22.741729 | orchestrator | 2025-08-29 15:06:22 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:22.744026 | orchestrator | 2025-08-29 15:06:22 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 
15:06:22.744095 | orchestrator | 2025-08-29 15:06:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:25.782946 | orchestrator | 2025-08-29 15:06:25 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:25.785732 | orchestrator | 2025-08-29 15:06:25 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:25.785832 | orchestrator | 2025-08-29 15:06:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:28.832257 | orchestrator | 2025-08-29 15:06:28 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:28.834250 | orchestrator | 2025-08-29 15:06:28 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:28.835201 | orchestrator | 2025-08-29 15:06:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:31.882869 | orchestrator | 2025-08-29 15:06:31 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:31.884123 | orchestrator | 2025-08-29 15:06:31 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:31.884743 | orchestrator | 2025-08-29 15:06:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:34.931384 | orchestrator | 2025-08-29 15:06:34 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state STARTED 2025-08-29 15:06:34.932746 | orchestrator | 2025-08-29 15:06:34 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:34.932795 | orchestrator | 2025-08-29 15:06:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:37.989212 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:06:37.992185 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task bf4cb9d0-0f9a-45df-8893-3733e9ff3661 is in state SUCCESS 2025-08-29 15:06:37.994153 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task 
a6a997c9-55cb-40aa-9c1c-013fb60dfeec is in state STARTED 2025-08-29 15:06:37.995352 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:37.996992 | orchestrator | 2025-08-29 15:06:37 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:06:37.997034 | orchestrator | 2025-08-29 15:06:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:41.066831 | orchestrator | 2025-08-29 15:06:41 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:06:41.067797 | orchestrator | 2025-08-29 15:06:41 | INFO  | Task a6a997c9-55cb-40aa-9c1c-013fb60dfeec is in state STARTED 2025-08-29 15:06:41.069169 | orchestrator | 2025-08-29 15:06:41 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:41.070636 | orchestrator | 2025-08-29 15:06:41 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:06:41.070672 | orchestrator | 2025-08-29 15:06:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:44.099082 | orchestrator | 2025-08-29 15:06:44 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:06:44.099140 | orchestrator | 2025-08-29 15:06:44 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:06:44.099146 | orchestrator | 2025-08-29 15:06:44 | INFO  | Task a6a997c9-55cb-40aa-9c1c-013fb60dfeec is in state SUCCESS 2025-08-29 15:06:44.099708 | orchestrator | 2025-08-29 15:06:44 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:06:44.100344 | orchestrator | 2025-08-29 15:06:44 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:44.101337 | orchestrator | 2025-08-29 15:06:44 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:06:44.101366 | orchestrator | 2025-08-29 15:06:44 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 15:06:47.135190 | orchestrator | 2025-08-29 15:06:47 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:06:47.135272 | orchestrator | 2025-08-29 15:06:47 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:06:47.136218 | orchestrator | 2025-08-29 15:06:47 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:06:47.136783 | orchestrator | 2025-08-29 15:06:47 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:47.137597 | orchestrator | 2025-08-29 15:06:47 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:06:47.137625 | orchestrator | 2025-08-29 15:06:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:50.168858 | orchestrator | 2025-08-29 15:06:50 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:06:50.170085 | orchestrator | 2025-08-29 15:06:50 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:06:50.175420 | orchestrator | 2025-08-29 15:06:50 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:06:50.175549 | orchestrator | 2025-08-29 15:06:50 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:50.176302 | orchestrator | 2025-08-29 15:06:50 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:06:50.176366 | orchestrator | 2025-08-29 15:06:50 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:53.218310 | orchestrator | 2025-08-29 15:06:53 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:06:53.222318 | orchestrator | 2025-08-29 15:06:53 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:06:53.223378 | orchestrator | 2025-08-29 15:06:53 | INFO  | Task 
81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:06:53.225106 | orchestrator | 2025-08-29 15:06:53 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:53.227100 | orchestrator | 2025-08-29 15:06:53 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:06:53.227138 | orchestrator | 2025-08-29 15:06:53 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:56.264522 | orchestrator | 2025-08-29 15:06:56 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:06:56.264645 | orchestrator | 2025-08-29 15:06:56 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:06:56.266347 | orchestrator | 2025-08-29 15:06:56 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:06:56.267145 | orchestrator | 2025-08-29 15:06:56 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:56.267904 | orchestrator | 2025-08-29 15:06:56 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:06:56.268058 | orchestrator | 2025-08-29 15:06:56 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:06:59.337297 | orchestrator | 2025-08-29 15:06:59 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:06:59.337382 | orchestrator | 2025-08-29 15:06:59 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:06:59.337392 | orchestrator | 2025-08-29 15:06:59 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:06:59.337400 | orchestrator | 2025-08-29 15:06:59 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:06:59.337408 | orchestrator | 2025-08-29 15:06:59 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:06:59.337416 | orchestrator | 2025-08-29 15:06:59 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 15:07:02.353121 | orchestrator | 2025-08-29 15:07:02 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:02.355448 | orchestrator | 2025-08-29 15:07:02 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:02.356234 | orchestrator | 2025-08-29 15:07:02 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:07:02.357298 | orchestrator | 2025-08-29 15:07:02 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state STARTED 2025-08-29 15:07:02.358706 | orchestrator | 2025-08-29 15:07:02 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:02.358744 | orchestrator | 2025-08-29 15:07:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:05.902724 | orchestrator | 2025-08-29 15:07:05.902937 | orchestrator | 2025-08-29 15:07:05.902951 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-08-29 15:07:05.902956 | orchestrator | 2025-08-29 15:07:05.902960 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-08-29 15:07:05.902965 | orchestrator | Friday 29 August 2025 15:05:45 +0000 (0:00:00.242) 0:00:00.242 ********* 2025-08-29 15:07:05.902987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-08-29 15:07:05.902993 | orchestrator | 2025-08-29 15:07:05.902997 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-08-29 15:07:05.903001 | orchestrator | Friday 29 August 2025 15:05:45 +0000 (0:00:00.251) 0:00:00.493 ********* 2025-08-29 15:07:05.903006 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-08-29 15:07:05.903010 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 
2025-08-29 15:07:05.903015 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-08-29 15:07:05.903019 | orchestrator | 2025-08-29 15:07:05.903073 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-08-29 15:07:05.903078 | orchestrator | Friday 29 August 2025 15:05:46 +0000 (0:00:01.287) 0:00:01.781 ********* 2025-08-29 15:07:05.903084 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-08-29 15:07:05.903088 | orchestrator | 2025-08-29 15:07:05.903092 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-08-29 15:07:05.903096 | orchestrator | Friday 29 August 2025 15:05:48 +0000 (0:00:01.166) 0:00:02.948 ********* 2025-08-29 15:07:05.903100 | orchestrator | changed: [testbed-manager] 2025-08-29 15:07:05.903104 | orchestrator | 2025-08-29 15:07:05.903108 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-08-29 15:07:05.903112 | orchestrator | Friday 29 August 2025 15:05:49 +0000 (0:00:00.969) 0:00:03.917 ********* 2025-08-29 15:07:05.903116 | orchestrator | changed: [testbed-manager] 2025-08-29 15:07:05.903120 | orchestrator | 2025-08-29 15:07:05.903123 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-08-29 15:07:05.903127 | orchestrator | Friday 29 August 2025 15:05:49 +0000 (0:00:00.824) 0:00:04.742 ********* 2025-08-29 15:07:05.903131 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-08-29 15:07:05.903135 | orchestrator | ok: [testbed-manager] 2025-08-29 15:07:05.903139 | orchestrator | 2025-08-29 15:07:05.903143 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-08-29 15:07:05.903147 | orchestrator | Friday 29 August 2025 15:06:26 +0000 (0:00:36.676) 0:00:41.418 ********* 2025-08-29 15:07:05.903151 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-08-29 15:07:05.903155 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-08-29 15:07:05.903159 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-08-29 15:07:05.903163 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-08-29 15:07:05.903167 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-08-29 15:07:05.903170 | orchestrator | 2025-08-29 15:07:05.903174 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-08-29 15:07:05.903178 | orchestrator | Friday 29 August 2025 15:06:30 +0000 (0:00:03.845) 0:00:45.264 ********* 2025-08-29 15:07:05.903182 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-08-29 15:07:05.903186 | orchestrator | 2025-08-29 15:07:05.903189 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-08-29 15:07:05.903193 | orchestrator | Friday 29 August 2025 15:06:30 +0000 (0:00:00.435) 0:00:45.699 ********* 2025-08-29 15:07:05.903253 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:07:05.903261 | orchestrator | 2025-08-29 15:07:05.903265 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-08-29 15:07:05.903269 | orchestrator | Friday 29 August 2025 15:06:30 +0000 (0:00:00.147) 0:00:45.846 ********* 2025-08-29 15:07:05.903273 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:07:05.903276 | orchestrator | 2025-08-29 15:07:05.903280 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-08-29 15:07:05.903284 | orchestrator | Friday 29 August 2025 15:06:31 +0000 (0:00:00.309) 0:00:46.156 ********* 2025-08-29 15:07:05.903293 | orchestrator | changed: [testbed-manager] 2025-08-29 15:07:05.903297 | orchestrator | 2025-08-29 15:07:05.903301 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-08-29 15:07:05.903305 | orchestrator | Friday 29 August 2025 15:06:32 +0000 (0:00:01.730) 0:00:47.887 ********* 2025-08-29 15:07:05.903308 | orchestrator | changed: [testbed-manager] 2025-08-29 15:07:05.903312 | orchestrator | 2025-08-29 15:07:05.903316 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-08-29 15:07:05.903320 | orchestrator | Friday 29 August 2025 15:06:33 +0000 (0:00:00.841) 0:00:48.728 ********* 2025-08-29 15:07:05.903323 | orchestrator | changed: [testbed-manager] 2025-08-29 15:07:05.903327 | orchestrator | 2025-08-29 15:07:05.903331 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-08-29 15:07:05.903335 | orchestrator | Friday 29 August 2025 15:06:34 +0000 (0:00:00.612) 0:00:49.341 ********* 2025-08-29 15:07:05.903339 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-08-29 15:07:05.903343 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-08-29 15:07:05.903347 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-08-29 15:07:05.903351 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-08-29 15:07:05.903355 | orchestrator | 2025-08-29 15:07:05.903358 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:07:05.903363 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-08-29 15:07:05.903368 | orchestrator | 2025-08-29 15:07:05.903372 | orchestrator | 2025-08-29 
15:07:05.903395 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:07:05.903400 | orchestrator | Friday 29 August 2025 15:06:35 +0000 (0:00:01.334) 0:00:50.675 ********* 2025-08-29 15:07:05.903404 | orchestrator | =============================================================================== 2025-08-29 15:07:05.903408 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.68s 2025-08-29 15:07:05.903412 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.85s 2025-08-29 15:07:05.903437 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.73s 2025-08-29 15:07:05.903457 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.33s 2025-08-29 15:07:05.903461 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.29s 2025-08-29 15:07:05.903465 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.17s 2025-08-29 15:07:05.903469 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.97s 2025-08-29 15:07:05.903476 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s 2025-08-29 15:07:05.903480 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.82s 2025-08-29 15:07:05.903483 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 2025-08-29 15:07:05.903487 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s 2025-08-29 15:07:05.903491 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2025-08-29 15:07:05.903495 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2025-08-29 15:07:05.903498 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2025-08-29 15:07:05.903502 | orchestrator | 2025-08-29 15:07:05.903506 | orchestrator | 2025-08-29 15:07:05.903510 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:07:05.903514 | orchestrator | 2025-08-29 15:07:05.903518 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:07:05.903521 | orchestrator | Friday 29 August 2025 15:06:39 +0000 (0:00:00.188) 0:00:00.188 ********* 2025-08-29 15:07:05.903525 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:07:05.903529 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:07:05.903538 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:07:05.903542 | orchestrator | 2025-08-29 15:07:05.903546 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:07:05.903550 | orchestrator | Friday 29 August 2025 15:06:39 +0000 (0:00:00.308) 0:00:00.496 ********* 2025-08-29 15:07:05.903553 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 15:07:05.903557 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 15:07:05.903561 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 15:07:05.903565 | orchestrator | 2025-08-29 15:07:05.903569 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-08-29 15:07:05.903573 | orchestrator | 2025-08-29 15:07:05.903576 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-08-29 15:07:05.903580 | orchestrator | Friday 29 August 2025 15:06:40 +0000 (0:00:00.747) 0:00:01.244 ********* 2025-08-29 15:07:05.903584 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:07:05.903588 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:07:05.903592 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 15:07:05.903595 | orchestrator | 2025-08-29 15:07:05.903599 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:07:05.903604 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:07:05.903608 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:07:05.903612 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:07:05.903616 | orchestrator | 2025-08-29 15:07:05.903620 | orchestrator | 2025-08-29 15:07:05.903624 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:07:05.903627 | orchestrator | Friday 29 August 2025 15:06:41 +0000 (0:00:00.814) 0:00:02.059 ********* 2025-08-29 15:07:05.903631 | orchestrator | =============================================================================== 2025-08-29 15:07:05.903635 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.82s 2025-08-29 15:07:05.903639 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2025-08-29 15:07:05.903643 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-08-29 15:07:05.903647 | orchestrator | 2025-08-29 15:07:05.903650 | orchestrator | 2025-08-29 15:07:05.903654 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:07:05.903658 | orchestrator | 2025-08-29 15:07:05.903662 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:07:05.903665 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:00.295) 0:00:00.295 ********* 2025-08-29 15:07:05.903669 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:07:05.903673 | 
orchestrator | ok: [testbed-node-1] 2025-08-29 15:07:05.903677 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:07:05.903680 | orchestrator | 2025-08-29 15:07:05.903684 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:07:05.903688 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:00.325) 0:00:00.621 ********* 2025-08-29 15:07:05.903692 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-08-29 15:07:05.903696 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-08-29 15:07:05.903699 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-08-29 15:07:05.903703 | orchestrator | 2025-08-29 15:07:05.903707 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-08-29 15:07:05.903711 | orchestrator | 2025-08-29 15:07:05.903726 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:07:05.903731 | orchestrator | Friday 29 August 2025 15:04:15 +0000 (0:00:00.433) 0:00:01.054 ********* 2025-08-29 15:07:05.903734 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:07:05.903742 | orchestrator | 2025-08-29 15:07:05.903746 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-08-29 15:07:05.903750 | orchestrator | Friday 29 August 2025 15:04:16 +0000 (0:00:00.664) 0:00:01.718 ********* 2025-08-29 15:07:05.903760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.903767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.903772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.903777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-08-29 15:07:05.903806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903820 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903824 | orchestrator | 2025-08-29 15:07:05.903828 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-08-29 15:07:05.903832 | orchestrator | Friday 29 August 2025 15:04:18 +0000 (0:00:01.739) 0:00:03.458 ********* 2025-08-29 15:07:05.903836 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-08-29 15:07:05.903840 | orchestrator | 2025-08-29 15:07:05.903844 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-08-29 15:07:05.903848 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:00.771) 0:00:04.229 ********* 2025-08-29 15:07:05.903851 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:07:05.903855 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:07:05.903859 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:07:05.903863 | orchestrator | 2025-08-29 15:07:05.903870 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-08-29 15:07:05.903874 | orchestrator | Friday 29 August 2025 15:04:19 +0000 (0:00:00.438) 0:00:04.668 ********* 2025-08-29 15:07:05.903878 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:07:05.903882 | orchestrator | 2025-08-29 15:07:05.903886 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-08-29 15:07:05.903890 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.629) 0:00:05.297 ********* 2025-08-29 15:07:05.903893 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:07:05.903897 | orchestrator | 2025-08-29 15:07:05.903910 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-08-29 15:07:05.903915 | orchestrator | Friday 29 August 2025 15:04:20 +0000 (0:00:00.579) 0:00:05.877 ********* 2025-08-29 15:07:05.903922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.903926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.903931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.903935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.903970 | orchestrator | 2025-08-29 15:07:05.903973 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-08-29 15:07:05.903977 | orchestrator | Friday 29 August 2025 15:04:23 +0000 (0:00:03.201) 0:00:09.078 ********* 2025-08-29 15:07:05.903981 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:07:05.903994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:07:05.904006 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:05.904010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:07:05.904014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:07:05.904028 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:05.904035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:07:05.904042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:07:05.904050 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:05.904054 | orchestrator | 2025-08-29 15:07:05.904058 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-08-29 15:07:05.904062 | orchestrator | Friday 29 August 2025 15:04:24 +0000 (0:00:00.797) 0:00:09.876 ********* 2025-08-29 15:07:05.904066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:07:05.904073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:07:05.904081 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:05.904089 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:07:05 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:05.904094 | orchestrator | 2025-08-29 15:07:05 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:05.904098 | orchestrator | 2025-08-29 15:07:05 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:07:05.904102 | orchestrator | 2025-08-29 15:07:05 | INFO  | Task 79f67e1e-6898-4629-81bc-6e8170ff6de6 is in state SUCCESS 2025-08-29 15:07:05.904106 | orchestrator | 2025-08-29 15:07:05 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:05.904110 | orchestrator | 2025-08-29 15:07:05 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:05.904116 | orchestrator | 2025-08-29 15:07:05 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:05.904121 | orchestrator | 2025-08-29 15:07:05.904127 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:07:05.904138 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:05.904142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-08-29 15:07:05.904149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-08-29 15:07:05.904160 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:05.904164 | orchestrator | 2025-08-29 15:07:05.904168 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-08-29 15:07:05.904172 | orchestrator | Friday 29 August 2025 15:04:25 +0000 (0:00:00.835) 0:00:10.711 ********* 
2025-08-29 15:07:05.904176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904225 | orchestrator | 2025-08-29 15:07:05.904228 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-08-29 15:07:05.904232 | orchestrator | Friday 29 August 2025 15:04:28 +0000 (0:00:03.224) 0:00:13.936 ********* 2025-08-29 15:07:05.904242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904291 | orchestrator | 2025-08-29 15:07:05.904295 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-08-29 15:07:05.904299 | orchestrator | Friday 29 August 2025 15:04:34 +0000 (0:00:05.406) 0:00:19.342 ********* 2025-08-29 15:07:05.904303 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:07:05.904306 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:07:05.904310 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:07:05.904314 | orchestrator | 2025-08-29 15:07:05.904318 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-08-29 15:07:05.904322 | orchestrator | Friday 29 August 2025 15:04:35 +0000 (0:00:01.389) 0:00:20.732 ********* 2025-08-29 15:07:05.904326 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:05.904330 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:05.904333 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:05.904337 | orchestrator | 2025-08-29 15:07:05.904341 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-08-29 15:07:05.904345 | orchestrator | Friday 29 August 2025 15:04:36 +0000 (0:00:00.572) 0:00:21.304 ********* 2025-08-29 15:07:05.904349 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:05.904352 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:05.904356 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:05.904360 | orchestrator | 2025-08-29 15:07:05.904364 | orchestrator | TASK [keystone : Copying Keystone 
Domain specific settings] ******************** 2025-08-29 15:07:05.904368 | orchestrator | Friday 29 August 2025 15:04:36 +0000 (0:00:00.327) 0:00:21.632 ********* 2025-08-29 15:07:05.904371 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:05.904375 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:05.904379 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:05.904383 | orchestrator | 2025-08-29 15:07:05.904387 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-08-29 15:07:05.904391 | orchestrator | Friday 29 August 2025 15:04:36 +0000 (0:00:00.391) 0:00:22.023 ********* 2025-08-29 15:07:05.904401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-08-29 15:07:05.904485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904497 | orchestrator | 2025-08-29 15:07:05.904501 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-08-29 15:07:05.904505 | orchestrator | Friday 29 August 2025 15:04:39 +0000 
(0:00:02.198) 0:00:24.222 ********* 2025-08-29 15:07:05.904509 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:05.904513 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:05.904517 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:05.904521 | orchestrator | 2025-08-29 15:07:05.904524 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-08-29 15:07:05.904528 | orchestrator | Friday 29 August 2025 15:04:39 +0000 (0:00:00.263) 0:00:24.486 ********* 2025-08-29 15:07:05.904532 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:07:05.904536 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:07:05.904540 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-08-29 15:07:05.904544 | orchestrator | 2025-08-29 15:07:05.904548 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-08-29 15:07:05.904551 | orchestrator | Friday 29 August 2025 15:04:41 +0000 (0:00:01.637) 0:00:26.123 ********* 2025-08-29 15:07:05.904555 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:07:05.904559 | orchestrator | 2025-08-29 15:07:05.904563 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-08-29 15:07:05.904567 | orchestrator | Friday 29 August 2025 15:04:41 +0000 (0:00:00.826) 0:00:26.949 ********* 2025-08-29 15:07:05.904570 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:07:05.904574 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:07:05.904578 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:07:05.904582 | orchestrator | 2025-08-29 15:07:05.904585 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-08-29 
15:07:05.904592 | orchestrator | Friday 29 August 2025 15:04:42 +0000 (0:00:00.626) 0:00:27.575 ********* 2025-08-29 15:07:05.904596 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:07:05.904600 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-08-29 15:07:05.904604 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-08-29 15:07:05.904607 | orchestrator | 2025-08-29 15:07:05.904611 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-08-29 15:07:05.904615 | orchestrator | Friday 29 August 2025 15:04:43 +0000 (0:00:00.933) 0:00:28.508 ********* 2025-08-29 15:07:05.904621 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:07:05.904625 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:07:05.904629 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:07:05.904633 | orchestrator | 2025-08-29 15:07:05.904637 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-08-29 15:07:05.904640 | orchestrator | Friday 29 August 2025 15:04:43 +0000 (0:00:00.260) 0:00:28.769 ********* 2025-08-29 15:07:05.904644 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 15:07:05.904648 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 15:07:05.904652 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-08-29 15:07:05.904655 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 15:07:05.904661 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 15:07:05.904665 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-08-29 15:07:05.904669 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 15:07:05.904673 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 15:07:05.904676 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-08-29 15:07:05.904680 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 15:07:05.904684 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 15:07:05.904688 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-08-29 15:07:05.904691 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 15:07:05.904695 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 15:07:05.904699 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-08-29 15:07:05.904703 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:07:05.904707 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:07:05.904710 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:07:05.904714 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:07:05.904718 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:07:05.904722 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:07:05.904725 | orchestrator | 2025-08-29 15:07:05.904729 
| orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-08-29 15:07:05.904733 | orchestrator | Friday 29 August 2025 15:04:52 +0000 (0:00:09.058) 0:00:37.828 ********* 2025-08-29 15:07:05.904737 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:07:05.904744 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:07:05.904747 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:07:05.904751 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:07:05.904755 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:07:05.904759 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:07:05.904762 | orchestrator | 2025-08-29 15:07:05.904766 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-08-29 15:07:05.904770 | orchestrator | Friday 29 August 2025 15:04:55 +0000 (0:00:02.856) 0:00:40.684 ********* 2025-08-29 15:07:05.904776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-08-29 15:07:05.904805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-08-29 15:07:05.904831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-08-29 15:07:05.904835 | orchestrator |
2025-08-29 15:07:05.904838 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 15:07:05.904842 | orchestrator | Friday 29 August 2025 15:04:57 +0000 (0:00:02.346) 0:00:43.031 *********
2025-08-29 15:07:05.904846 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:07:05.904853 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:07:05.904857 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:07:05.904861 | orchestrator |
2025-08-29 15:07:05.904864 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-08-29 15:07:05.904868 | orchestrator | Friday 29 August 2025 15:04:58 +0000 (0:00:00.266) 0:00:43.298 *********
2025-08-29 15:07:05.904872 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:05.904876 | orchestrator |
2025-08-29 15:07:05.904880 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-08-29 15:07:05.904883 | orchestrator | Friday 29 August 2025 15:05:00 +0000 (0:00:02.266) 0:00:45.565 *********
2025-08-29 15:07:05.904887 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:05.904891 | orchestrator |
2025-08-29 15:07:05.904895 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-08-29 15:07:05.904899 | orchestrator | Friday 29 August 2025 15:05:02 +0000 (0:00:01.962) 0:00:47.527 *********
2025-08-29 15:07:05.904902 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:07:05.904906 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:07:05.904910 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:07:05.904914 | orchestrator |
2025-08-29 15:07:05.904918 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-08-29 15:07:05.904921 | orchestrator | Friday 29 August 2025 15:05:03 +0000 (0:00:00.862) 0:00:48.389 *********
2025-08-29 15:07:05.904925 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:07:05.904929 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:07:05.904933 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:07:05.904936 | orchestrator |
2025-08-29 15:07:05.904940 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-08-29 15:07:05.904944 | orchestrator | Friday 29 August 2025 15:05:03 +0000 (0:00:00.687) 0:00:49.077 *********
2025-08-29 15:07:05.904948 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:07:05.904952 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:07:05.904956 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:07:05.904959 | orchestrator |
2025-08-29 15:07:05.904963 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-08-29 15:07:05.904967 | orchestrator | Friday 29 August 2025 15:05:04 +0000 (0:00:00.482) 0:00:49.559 *********
2025-08-29 15:07:05.904971 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:05.904975 | orchestrator |
2025-08-29 15:07:05.904978 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-08-29 15:07:05.904982 | orchestrator | Friday 29 August 2025 15:05:18 +0000 (0:00:13.745) 0:01:03.304 *********
2025-08-29 15:07:05.904986 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:05.904990 | orchestrator |
2025-08-29 15:07:05.904993 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:07:05.904997 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:10.045) 0:01:13.350 *********
2025-08-29 15:07:05.905001 | orchestrator |
2025-08-29 15:07:05.905005 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:07:05.905008 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:00.069) 0:01:13.419 *********
2025-08-29 15:07:05.905012 | orchestrator |
2025-08-29 15:07:05.905016 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-08-29 15:07:05.905022 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:00.068) 0:01:13.488 *********
2025-08-29 15:07:05.905026 | orchestrator |
2025-08-29 15:07:05.905030 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-08-29 15:07:05.905033 | orchestrator | Friday 29 August 2025 15:05:28 +0000 (0:00:00.071) 0:01:13.560 *********
2025-08-29 15:07:05.905037 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:05.905041 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:07:05.905045 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:07:05.905048 | orchestrator |
2025-08-29 15:07:05.905052 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-08-29 15:07:05.905060 | orchestrator | Friday 29 August 2025 15:05:48 +0000 (0:00:19.752) 0:01:33.313 *********
2025-08-29 15:07:05.905064 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:05.905068 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:07:05.905071 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:07:05.905075 | orchestrator |
2025-08-29 15:07:05.905079 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-08-29 15:07:05.905085 | orchestrator | Friday 29 August 2025 15:05:58 +0000 (0:00:10.119) 0:01:43.432 *********
2025-08-29 15:07:05.905089 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:05.905093 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:07:05.905097 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:07:05.905100 | orchestrator |
2025-08-29 15:07:05.905104 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 15:07:05.905108 | orchestrator | Friday 29 August 2025 15:06:13 +0000 (0:00:14.765) 0:01:58.198 *********
2025-08-29 15:07:05.905112 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:07:05.905116 | orchestrator |
2025-08-29 15:07:05.905119 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-08-29 15:07:05.905123 | orchestrator | Friday 29 August 2025 15:06:13 +0000 (0:00:00.761) 0:01:58.960 *********
2025-08-29 15:07:05.905127 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:07:05.905131 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:07:05.905134 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:07:05.905138 | orchestrator |
2025-08-29 15:07:05.905142 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-08-29 15:07:05.905146 | orchestrator | Friday 29 August 2025 15:06:14 +0000 (0:00:00.887) 0:01:59.847 *********
2025-08-29 15:07:05.905149 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:07:05.905153 | orchestrator |
2025-08-29 15:07:05.905157 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-08-29 15:07:05.905161 | orchestrator | Friday 29 August 2025 15:06:16 +0000 (0:00:01.668) 0:02:01.516 *********
2025-08-29 15:07:05.905165 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-08-29 15:07:05.905168 | orchestrator |
2025-08-29 15:07:05.905172 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-08-29 15:07:05.905176 | orchestrator | Friday 29 August 2025 15:06:27 +0000 (0:00:11.235) 0:02:12.751 *********
2025-08-29 15:07:05.905180 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-08-29 15:07:05.905183 | orchestrator |
2025-08-29 15:07:05.905187 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-08-29 15:07:05.905191 | orchestrator | Friday 29 August 2025 15:06:52 +0000 (0:00:25.249) 0:02:38.000 *********
2025-08-29 15:07:05.905195 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-08-29 15:07:05.905199 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-08-29 15:07:05.905202 | orchestrator |
2025-08-29 15:07:05.905206 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-08-29 15:07:05.905210 | orchestrator | Friday 29 August 2025 15:06:58 +0000 (0:00:06.045) 0:02:44.045 *********
2025-08-29 15:07:05.905214 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:07:05.905217 | orchestrator |
2025-08-29 15:07:05.905221 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-08-29 15:07:05.905225 | orchestrator | Friday 29 August 2025 15:06:59 +0000 (0:00:00.114) 0:02:44.160 *********
2025-08-29 15:07:05.905229 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:07:05.905233 | orchestrator |
2025-08-29 15:07:05.905236 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-08-29 15:07:05.905240 | orchestrator | Friday 29 August 2025 15:06:59 +0000 (0:00:00.158) 0:02:44.319 *********
2025-08-29 15:07:05.905244 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:07:05.905248 | orchestrator |
2025-08-29 15:07:05.905254 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-08-29 15:07:05.905258 | orchestrator | Friday 29 August 2025 15:06:59 +0000 (0:00:00.174) 0:02:44.493 *********
2025-08-29 15:07:05.905262 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:07:05.905265 | orchestrator |
2025-08-29 15:07:05.905269 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-08-29 15:07:05.905273 | orchestrator | Friday 29 August 2025 15:06:59 +0000 (0:00:00.590) 0:02:45.087 *********
2025-08-29 15:07:05.905277 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:07:05.905281 | orchestrator |
2025-08-29 15:07:05.905284 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-08-29 15:07:05.905288 | orchestrator | Friday 29 August 2025 15:07:03 +0000 (0:00:03.244) 0:02:48.331 *********
2025-08-29 15:07:05.905292 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:07:05.905296 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:07:05.905299 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:07:05.905303 | orchestrator |
2025-08-29 15:07:05.905307 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:07:05.905311 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-08-29 15:07:05.905316 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-08-29 15:07:05.905322 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-08-29 15:07:05.905326 | orchestrator |
2025-08-29 15:07:05.905330 | orchestrator |
2025-08-29 15:07:05.905334 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:07:05.905338 | orchestrator | Friday 29 August 2025 15:07:03 +0000 (0:00:00.454) 0:02:48.786 *********
2025-08-29 15:07:05.905341 | orchestrator | ===============================================================================
2025-08-29 15:07:05.905345 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.25s
2025-08-29 15:07:05.905349 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.75s
2025-08-29 15:07:05.905352 | orchestrator | keystone : Restart keystone container ---------------------------------- 14.77s
2025-08-29 15:07:05.905356 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.75s
2025-08-29 15:07:05.905363 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.24s
2025-08-29 15:07:05.905367 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.12s
2025-08-29 15:07:05.905371 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.05s
2025-08-29 15:07:05.905374 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.06s
2025-08-29 15:07:05.905378 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.05s
2025-08-29 15:07:05.905382 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.41s
2025-08-29 15:07:05.905386 | orchestrator | keystone : Creating default user role ----------------------------------- 3.24s
2025-08-29 15:07:05.905389 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.22s
2025-08-29 15:07:05.905393 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.20s
2025-08-29 15:07:05.905397 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.86s
2025-08-29 15:07:05.905401 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.35s
2025-08-29 15:07:05.905404 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.27s
2025-08-29 15:07:05.905408 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.20s
2025-08-29 15:07:05.905412 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 1.96s
2025-08-29 15:07:05.905455 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.74s
2025-08-29 15:07:05.905459 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.67s
2025-08-29 15:07:08.455649 | orchestrator | 2025-08-29 15:07:08 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED
2025-08-29 15:07:08.455822 | orchestrator | 2025-08-29 15:07:08 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED
2025-08-29 15:07:08.457214 | orchestrator | 2025-08-29 15:07:08 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED
2025-08-29 15:07:08.458521 | orchestrator | 2025-08-29 15:07:08 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED
2025-08-29 15:07:08.459712 | orchestrator | 2025-08-29 15:07:08 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED
2025-08-29 15:07:08.460765 | orchestrator | 2025-08-29 15:07:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:07:11.494493 | orchestrator | 2025-08-29 15:07:11 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED
2025-08-29 15:07:11.494630 | orchestrator | 2025-08-29 15:07:11 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED
2025-08-29 15:07:11.494655 | orchestrator | 2025-08-29 15:07:11 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED
2025-08-29 15:07:11.494674 | orchestrator | 2025-08-29 15:07:11 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED
2025-08-29 15:07:11.507367 | orchestrator | 2025-08-29 15:07:11 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED
2025-08-29 15:07:11.507477 | orchestrator | 2025-08-29
15:07:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:14.546235 | orchestrator | 2025-08-29 15:07:14 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:14.546739 | orchestrator | 2025-08-29 15:07:14 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:14.547333 | orchestrator | 2025-08-29 15:07:14 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:07:14.548812 | orchestrator | 2025-08-29 15:07:14 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:14.549379 | orchestrator | 2025-08-29 15:07:14 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:14.549506 | orchestrator | 2025-08-29 15:07:14 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:17.586489 | orchestrator | 2025-08-29 15:07:17 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:17.586603 | orchestrator | 2025-08-29 15:07:17 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:17.587462 | orchestrator | 2025-08-29 15:07:17 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:07:17.588057 | orchestrator | 2025-08-29 15:07:17 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:17.588774 | orchestrator | 2025-08-29 15:07:17 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:17.588804 | orchestrator | 2025-08-29 15:07:17 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:20.631620 | orchestrator | 2025-08-29 15:07:20 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:20.631710 | orchestrator | 2025-08-29 15:07:20 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:20.631722 | orchestrator | 2025-08-29 15:07:20 | INFO  | Task 
81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state STARTED 2025-08-29 15:07:20.631750 | orchestrator | 2025-08-29 15:07:20 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:20.631755 | orchestrator | 2025-08-29 15:07:20 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:20.631759 | orchestrator | 2025-08-29 15:07:20 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:23.663146 | orchestrator | 2025-08-29 15:07:23 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:23.663326 | orchestrator | 2025-08-29 15:07:23 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:23.664081 | orchestrator | 2025-08-29 15:07:23 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:23.664696 | orchestrator | 2025-08-29 15:07:23 | INFO  | Task 81783b15-c2d3-4275-8e17-d4318f84d2d3 is in state SUCCESS 2025-08-29 15:07:23.665868 | orchestrator | 2025-08-29 15:07:23 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:23.666200 | orchestrator | 2025-08-29 15:07:23 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:23.666229 | orchestrator | 2025-08-29 15:07:23 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:26.701595 | orchestrator | 2025-08-29 15:07:26 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:26.701680 | orchestrator | 2025-08-29 15:07:26 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:26.701695 | orchestrator | 2025-08-29 15:07:26 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:26.702735 | orchestrator | 2025-08-29 15:07:26 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:26.702920 | orchestrator | 2025-08-29 15:07:26 | INFO  | Task 
12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:26.702950 | orchestrator | 2025-08-29 15:07:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:29.742332 | orchestrator | 2025-08-29 15:07:29 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:29.743927 | orchestrator | 2025-08-29 15:07:29 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:29.744422 | orchestrator | 2025-08-29 15:07:29 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:29.745603 | orchestrator | 2025-08-29 15:07:29 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:29.746597 | orchestrator | 2025-08-29 15:07:29 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:29.746665 | orchestrator | 2025-08-29 15:07:29 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:32.793208 | orchestrator | 2025-08-29 15:07:32 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:32.796444 | orchestrator | 2025-08-29 15:07:32 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:32.796491 | orchestrator | 2025-08-29 15:07:32 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:32.796497 | orchestrator | 2025-08-29 15:07:32 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:32.797136 | orchestrator | 2025-08-29 15:07:32 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:32.797152 | orchestrator | 2025-08-29 15:07:32 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:35.834629 | orchestrator | 2025-08-29 15:07:35 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:35.836115 | orchestrator | 2025-08-29 15:07:35 | INFO  | Task 
f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:35.840322 | orchestrator | 2025-08-29 15:07:35 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:35.842114 | orchestrator | 2025-08-29 15:07:35 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:35.844123 | orchestrator | 2025-08-29 15:07:35 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:35.844432 | orchestrator | 2025-08-29 15:07:35 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:38.876569 | orchestrator | 2025-08-29 15:07:38 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:38.878305 | orchestrator | 2025-08-29 15:07:38 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:38.878791 | orchestrator | 2025-08-29 15:07:38 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:38.879743 | orchestrator | 2025-08-29 15:07:38 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:38.880758 | orchestrator | 2025-08-29 15:07:38 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:38.880906 | orchestrator | 2025-08-29 15:07:38 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:41.917789 | orchestrator | 2025-08-29 15:07:41 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:41.918510 | orchestrator | 2025-08-29 15:07:41 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:41.919841 | orchestrator | 2025-08-29 15:07:41 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:41.921820 | orchestrator | 2025-08-29 15:07:41 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:41.922666 | orchestrator | 2025-08-29 15:07:41 | INFO  | Task 
12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:41.922730 | orchestrator | 2025-08-29 15:07:41 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:44.957265 | orchestrator | 2025-08-29 15:07:44 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:44.959547 | orchestrator | 2025-08-29 15:07:44 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:44.961134 | orchestrator | 2025-08-29 15:07:44 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:44.964321 | orchestrator | 2025-08-29 15:07:44 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:44.967311 | orchestrator | 2025-08-29 15:07:44 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:44.967758 | orchestrator | 2025-08-29 15:07:44 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:47.991287 | orchestrator | 2025-08-29 15:07:47 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:47.991465 | orchestrator | 2025-08-29 15:07:47 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:47.992049 | orchestrator | 2025-08-29 15:07:47 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:47.992733 | orchestrator | 2025-08-29 15:07:47 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:47.994664 | orchestrator | 2025-08-29 15:07:47 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:47.994771 | orchestrator | 2025-08-29 15:07:47 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:51.036896 | orchestrator | 2025-08-29 15:07:51 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:51.037219 | orchestrator | 2025-08-29 15:07:51 | INFO  | Task 
f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:51.037838 | orchestrator | 2025-08-29 15:07:51 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:51.038687 | orchestrator | 2025-08-29 15:07:51 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:51.039311 | orchestrator | 2025-08-29 15:07:51 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:51.039332 | orchestrator | 2025-08-29 15:07:51 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:54.068557 | orchestrator | 2025-08-29 15:07:54 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:54.068736 | orchestrator | 2025-08-29 15:07:54 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:54.069613 | orchestrator | 2025-08-29 15:07:54 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:54.070268 | orchestrator | 2025-08-29 15:07:54 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:54.072262 | orchestrator | 2025-08-29 15:07:54 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:54.072321 | orchestrator | 2025-08-29 15:07:54 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:07:57.102487 | orchestrator | 2025-08-29 15:07:57 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:07:57.104314 | orchestrator | 2025-08-29 15:07:57 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:07:57.105029 | orchestrator | 2025-08-29 15:07:57 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:07:57.105750 | orchestrator | 2025-08-29 15:07:57 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:07:57.106728 | orchestrator | 2025-08-29 15:07:57 | INFO  | Task 
12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:07:57.106760 | orchestrator | 2025-08-29 15:07:57 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:00.191682 | orchestrator | 2025-08-29 15:08:00 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:08:00.191764 | orchestrator | 2025-08-29 15:08:00 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:08:00.191771 | orchestrator | 2025-08-29 15:08:00 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:08:00.191775 | orchestrator | 2025-08-29 15:08:00 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:08:00.191779 | orchestrator | 2025-08-29 15:08:00 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:08:00.191784 | orchestrator | 2025-08-29 15:08:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:03.194563 | orchestrator | 2025-08-29 15:08:03 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:08:03.195194 | orchestrator | 2025-08-29 15:08:03 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:08:03.196185 | orchestrator | 2025-08-29 15:08:03 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED 2025-08-29 15:08:03.197128 | orchestrator | 2025-08-29 15:08:03 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:08:03.198206 | orchestrator | 2025-08-29 15:08:03 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:08:03.198452 | orchestrator | 2025-08-29 15:08:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:06.227899 | orchestrator | 2025-08-29 15:08:06 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:08:06.228872 | orchestrator | 2025-08-29 15:08:06 | INFO  | Task 
f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED
2025-08-29 15:08:06.230725 | orchestrator | 2025-08-29 15:08:06 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state STARTED
2025-08-29 15:08:06.231308 | orchestrator | 2025-08-29 15:08:06 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED
2025-08-29 15:08:06.233336 | orchestrator | 2025-08-29 15:08:06 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED
2025-08-29 15:08:06.233444 | orchestrator | 2025-08-29 15:08:06 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:08:09.258274 | orchestrator | 2025-08-29 15:08:09 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:08:09.258539 | orchestrator | 2025-08-29 15:08:09 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED
2025-08-29 15:08:09.258954 | orchestrator | 2025-08-29 15:08:09 | INFO  | Task e884080e-df6c-4b35-b545-4a75c3c74e4f is in state SUCCESS
2025-08-29 15:08:09.259116 | orchestrator |
2025-08-29 15:08:09.259130 | orchestrator |
2025-08-29 15:08:09.259139 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:08:09.259150 | orchestrator |
2025-08-29 15:08:09.259159 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:08:09.259173 | orchestrator | Friday 29 August 2025 15:06:47 +0000 (0:00:00.376) 0:00:00.376 *********
2025-08-29 15:08:09.259188 | orchestrator | ok: [testbed-manager]
2025-08-29 15:08:09.259199 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:08:09.259210 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:08:09.259219 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:08:09.259229 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:08:09.259239 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:08:09.259249 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:08:09.259258 | orchestrator |
2025-08-29 15:08:09.259269 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:08:09.259279 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:00.939) 0:00:01.315 *********
2025-08-29 15:08:09.259290 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-08-29 15:08:09.259318 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-08-29 15:08:09.259329 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-08-29 15:08:09.259340 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-08-29 15:08:09.259351 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-08-29 15:08:09.259361 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-08-29 15:08:09.259372 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-08-29 15:08:09.259379 | orchestrator |
2025-08-29 15:08:09.259408 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-08-29 15:08:09.259415 | orchestrator |
2025-08-29 15:08:09.259421 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-08-29 15:08:09.259428 | orchestrator | Friday 29 August 2025 15:06:49 +0000 (0:00:00.988) 0:00:02.303 *********
2025-08-29 15:08:09.259435 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:08:09.259463 | orchestrator |
2025-08-29 15:08:09.259470 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-08-29 15:08:09.259476 | orchestrator | Friday 29 August 2025 15:06:51 +0000 (0:00:02.039) 0:00:04.343 *********
2025-08-29 15:08:09.259483 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2025-08-29 15:08:09.259489 | orchestrator |
2025-08-29 15:08:09.259495 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-08-29 15:08:09.259501 | orchestrator | Friday 29 August 2025 15:06:55 +0000 (0:00:04.083) 0:00:08.426 *********
2025-08-29 15:08:09.259508 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-08-29 15:08:09.259516 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-08-29 15:08:09.259522 | orchestrator |
2025-08-29 15:08:09.259528 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-08-29 15:08:09.259535 | orchestrator | Friday 29 August 2025 15:07:01 +0000 (0:00:06.079) 0:00:14.506 *********
2025-08-29 15:08:09.259541 | orchestrator | ok: [testbed-manager] => (item=service)
2025-08-29 15:08:09.259547 | orchestrator |
2025-08-29 15:08:09.259553 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-08-29 15:08:09.259559 | orchestrator | Friday 29 August 2025 15:07:04 +0000 (0:00:03.095) 0:00:17.602 *********
2025-08-29 15:08:09.259565 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:08:09.259572 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2025-08-29 15:08:09.259578 | orchestrator |
2025-08-29 15:08:09.259584 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-08-29 15:08:09.259590 | orchestrator | Friday 29 August 2025 15:07:09 +0000 (0:00:05.137) 0:00:22.739 *********
2025-08-29 15:08:09.259596 | orchestrator | ok: [testbed-manager] => (item=admin)
2025-08-29 15:08:09.259602 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2025-08-29 15:08:09.259681 | orchestrator |
2025-08-29 15:08:09.259689 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-08-29 15:08:09.259695 | orchestrator | Friday 29 August 2025 15:07:16 +0000 (0:00:06.678) 0:00:29.418 *********
2025-08-29 15:08:09.259702 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2025-08-29 15:08:09.259708 | orchestrator |
2025-08-29 15:08:09.259714 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:08:09.259721 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:08:09.259728 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:08:09.259734 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:08:09.259740 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:08:09.259747 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:08:09.259763 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:08:09.259770 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:08:09.259777 | orchestrator |
2025-08-29 15:08:09.259783 | orchestrator |
2025-08-29 15:08:09.259789 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:08:09.259796 | orchestrator | Friday 29 August 2025 15:07:21 +0000 (0:00:05.146) 0:00:34.564 *********
2025-08-29 15:08:09.259809 | orchestrator | ===============================================================================
2025-08-29 15:08:09.259815 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.68s
2025-08-29 15:08:09.259821 |
orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.08s 2025-08-29 15:08:09.259828 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.15s 2025-08-29 15:08:09.259840 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 5.14s 2025-08-29 15:08:09.259846 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.08s 2025-08-29 15:08:09.259852 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.10s 2025-08-29 15:08:09.259858 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.04s 2025-08-29 15:08:09.259865 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.99s 2025-08-29 15:08:09.259871 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2025-08-29 15:08:09.259877 | orchestrator | 2025-08-29 15:08:09.259883 | orchestrator | 2025-08-29 15:08:09.259890 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-08-29 15:08:09.259896 | orchestrator | 2025-08-29 15:08:09.259902 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-08-29 15:08:09.259909 | orchestrator | Friday 29 August 2025 15:06:39 +0000 (0:00:00.256) 0:00:00.256 ********* 2025-08-29 15:08:09.259915 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.259921 | orchestrator | 2025-08-29 15:08:09.259928 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-08-29 15:08:09.259934 | orchestrator | Friday 29 August 2025 15:06:41 +0000 (0:00:02.261) 0:00:02.518 ********* 2025-08-29 15:08:09.259940 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.259946 | orchestrator | 2025-08-29 15:08:09.259953 | orchestrator | TASK [Set 
mgr/dashboard/server_port to 7000] *********************************** 2025-08-29 15:08:09.259959 | orchestrator | Friday 29 August 2025 15:06:42 +0000 (0:00:00.904) 0:00:03.422 ********* 2025-08-29 15:08:09.259965 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.259971 | orchestrator | 2025-08-29 15:08:09.259977 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-08-29 15:08:09.259984 | orchestrator | Friday 29 August 2025 15:06:43 +0000 (0:00:01.081) 0:00:04.504 ********* 2025-08-29 15:08:09.259990 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.259996 | orchestrator | 2025-08-29 15:08:09.260002 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-08-29 15:08:09.260009 | orchestrator | Friday 29 August 2025 15:06:45 +0000 (0:00:01.154) 0:00:05.658 ********* 2025-08-29 15:08:09.260015 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.260021 | orchestrator | 2025-08-29 15:08:09.260027 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-08-29 15:08:09.260034 | orchestrator | Friday 29 August 2025 15:06:46 +0000 (0:00:01.005) 0:00:06.663 ********* 2025-08-29 15:08:09.260040 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.260046 | orchestrator | 2025-08-29 15:08:09.260052 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-08-29 15:08:09.260058 | orchestrator | Friday 29 August 2025 15:06:47 +0000 (0:00:01.116) 0:00:07.780 ********* 2025-08-29 15:08:09.260065 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.260071 | orchestrator | 2025-08-29 15:08:09.260077 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-08-29 15:08:09.260083 | orchestrator | Friday 29 August 2025 15:06:49 +0000 (0:00:01.977) 0:00:09.758 ********* 2025-08-29 15:08:09.260089 
| orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.260096 | orchestrator | 2025-08-29 15:08:09.260102 | orchestrator | TASK [Create admin user] ******************************************************* 2025-08-29 15:08:09.260108 | orchestrator | Friday 29 August 2025 15:06:50 +0000 (0:00:01.243) 0:00:11.001 ********* 2025-08-29 15:08:09.260120 | orchestrator | changed: [testbed-manager] 2025-08-29 15:08:09.260126 | orchestrator | 2025-08-29 15:08:09.260132 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-08-29 15:08:09.260139 | orchestrator | Friday 29 August 2025 15:07:43 +0000 (0:00:53.417) 0:01:04.418 ********* 2025-08-29 15:08:09.260145 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:08:09.260151 | orchestrator | 2025-08-29 15:08:09.260157 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 15:08:09.260163 | orchestrator | 2025-08-29 15:08:09.260170 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:08:09.260176 | orchestrator | Friday 29 August 2025 15:07:44 +0000 (0:00:00.157) 0:01:04.576 ********* 2025-08-29 15:08:09.260182 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:08:09.260188 | orchestrator | 2025-08-29 15:08:09.260194 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 15:08:09.260201 | orchestrator | 2025-08-29 15:08:09.260207 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:08:09.260213 | orchestrator | Friday 29 August 2025 15:07:55 +0000 (0:00:11.628) 0:01:16.205 ********* 2025-08-29 15:08:09.260219 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:08:09.260225 | orchestrator | 2025-08-29 15:08:09.260232 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-08-29 
15:08:09.260238 | orchestrator | 2025-08-29 15:08:09.260244 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-08-29 15:08:09.260251 | orchestrator | Friday 29 August 2025 15:08:06 +0000 (0:00:11.276) 0:01:27.481 ********* 2025-08-29 15:08:09.260257 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:08:09.260263 | orchestrator | 2025-08-29 15:08:09.260274 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:08:09.260329 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-08-29 15:08:09.260338 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:08:09.260345 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:08:09.260351 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-08-29 15:08:09.260357 | orchestrator | 2025-08-29 15:08:09.260363 | orchestrator | 2025-08-29 15:08:09.260370 | orchestrator | 2025-08-29 15:08:09.260380 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:08:09.260503 | orchestrator | Friday 29 August 2025 15:08:08 +0000 (0:00:01.150) 0:01:28.632 ********* 2025-08-29 15:08:09.260512 | orchestrator | =============================================================================== 2025-08-29 15:08:09.260518 | orchestrator | Create admin user ------------------------------------------------------ 53.42s 2025-08-29 15:08:09.260525 | orchestrator | Restart ceph manager service ------------------------------------------- 24.06s 2025-08-29 15:08:09.260531 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.26s 2025-08-29 15:08:09.260537 | orchestrator | Enable the ceph dashboard 
----------------------------------------------- 1.98s 2025-08-29 15:08:09.260543 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.24s 2025-08-29 15:08:09.260550 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.15s 2025-08-29 15:08:09.260556 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.12s 2025-08-29 15:08:09.260562 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.08s 2025-08-29 15:08:09.260568 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.01s 2025-08-29 15:08:09.260574 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2025-08-29 15:08:09.260588 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2025-08-29 15:08:09.260594 | orchestrator | 2025-08-29 15:08:09 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:08:09.260604 | orchestrator | 2025-08-29 15:08:09 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:08:09.260611 | orchestrator | 2025-08-29 15:08:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:12.280045 | orchestrator | 2025-08-29 15:08:12 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:08:12.280494 | orchestrator | 2025-08-29 15:08:12 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:08:12.281068 | orchestrator | 2025-08-29 15:08:12 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:08:12.281871 | orchestrator | 2025-08-29 15:08:12 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:08:12.281894 | orchestrator | 2025-08-29 15:08:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:08:15.307532 | 
orchestrator | 2025-08-29 15:08:15 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:08:15.307906 | orchestrator | 2025-08-29 15:08:15 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state STARTED 2025-08-29 15:08:15.308650 | orchestrator | 2025-08-29 15:08:15 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:08:15.309364 | orchestrator | 2025-08-29 15:08:15 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:08:15.309408 | orchestrator | 2025-08-29 15:08:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:46.689306 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task
f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:09:46.689786 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task f70e9555-337a-4d13-bb37-b24818d37a45 is in state SUCCESS
2025-08-29 15:09:46.691188 | orchestrator |
2025-08-29 15:09:46.691238 | orchestrator |
2025-08-29 15:09:46.691246 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:09:46.691253 | orchestrator |
2025-08-29 15:09:46.691261 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:09:46.691268 | orchestrator | Friday 29 August 2025 15:06:47 +0000 (0:00:00.289) 0:00:00.289 *********
2025-08-29 15:09:46.691275 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:09:46.691283 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:09:46.691290 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:09:46.691297 | orchestrator |
2025-08-29 15:09:46.691304 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:09:46.691311 | orchestrator | Friday 29 August 2025 15:06:47 +0000 (0:00:00.361) 0:00:00.651 *********
2025-08-29 15:09:46.691318 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-08-29 15:09:46.691325 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-08-29 15:09:46.691332 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-08-29 15:09:46.691375 | orchestrator |
2025-08-29 15:09:46.691387 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-08-29 15:09:46.691395 | orchestrator |
2025-08-29 15:09:46.691401 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 15:09:46.691429 | orchestrator | Friday 29 August 2025 15:06:47 +0000 (0:00:00.440) 0:00:01.091 *********
2025-08-29 15:09:46.691450 | orchestrator | included:
/ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:09:46.691458 | orchestrator |
2025-08-29 15:09:46.691464 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-08-29 15:09:46.691471 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:00.505) 0:00:01.596 *********
2025-08-29 15:09:46.691478 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-08-29 15:09:46.691485 | orchestrator |
2025-08-29 15:09:46.691492 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-08-29 15:09:46.691499 | orchestrator | Friday 29 August 2025 15:06:52 +0000 (0:00:04.395) 0:00:05.992 *********
2025-08-29 15:09:46.691506 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-08-29 15:09:46.691513 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-08-29 15:09:46.691519 | orchestrator |
2025-08-29 15:09:46.691526 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-08-29 15:09:46.691533 | orchestrator | Friday 29 August 2025 15:06:59 +0000 (0:00:06.346) 0:00:12.338 *********
2025-08-29 15:09:46.691540 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-08-29 15:09:46.691546 | orchestrator |
2025-08-29 15:09:46.691553 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-08-29 15:09:46.691560 | orchestrator | Friday 29 August 2025 15:07:02 +0000 (0:00:03.609) 0:00:15.948 *********
2025-08-29 15:09:46.691581 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:09:46.691588 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-08-29 15:09:46.691595 | orchestrator |
2025-08-29 15:09:46.691602 | orchestrator | TASK
[service-ks-register : glance | Creating roles] ***************************
2025-08-29 15:09:46.691608 | orchestrator | Friday 29 August 2025 15:07:07 +0000 (0:00:04.628) 0:00:20.577 *********
2025-08-29 15:09:46.691615 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 15:09:46.691622 | orchestrator |
2025-08-29 15:09:46.691628 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-08-29 15:09:46.691635 | orchestrator | Friday 29 August 2025 15:07:10 +0000 (0:00:03.455) 0:00:24.032 *********
2025-08-29 15:09:46.691641 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-08-29 15:09:46.691648 | orchestrator |
2025-08-29 15:09:46.691654 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-08-29 15:09:46.691661 | orchestrator | Friday 29 August 2025 15:07:15 +0000 (0:00:04.612) 0:00:28.645 *********
2025-08-29 15:09:46.691683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.691704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.691713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-08-29 15:09:46.691721 | orchestrator |
2025-08-29 15:09:46.691728 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-08-29 15:09:46.691740 | orchestrator | Friday 29 August 2025 15:07:22 +0000 (0:00:07.545) 0:00:36.191 *********
2025-08-29 15:09:46.691747 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:09:46.691754 | orchestrator |
2025-08-29 15:09:46.691765 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-08-29 15:09:46.691772 | orchestrator | Friday 29 August 2025 15:07:23 +0000 (0:00:00.605) 0:00:36.797 *********
2025-08-29 15:09:46.691779 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:09:46.691786 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:09:46.691793 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:09:46.691799 | orchestrator |
2025-08-29 15:09:46.691806 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-08-29 15:09:46.691813 | orchestrator | Friday 29 August 2025 15:07:27 +0000 (0:00:03.793) 0:00:40.590 *********
2025-08-29 15:09:46.691819 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:09:46.691826 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:09:46.691833 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:09:46.691840 | orchestrator |
2025-08-29 15:09:46.691847 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-08-29 15:09:46.691853 | orchestrator | Friday 29 August 2025 15:07:28 +0000 (0:00:01.590) 0:00:42.181 *********
2025-08-29 15:09:46.691863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:09:46.691870 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:09:46.691877 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:09:46.691884 | orchestrator |
2025-08-29 15:09:46.691890 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-08-29 15:09:46.691898 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:01.139) 0:00:43.320 *********
2025-08-29 15:09:46.691904 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:09:46.691911 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:09:46.691918 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:09:46.691924 | orchestrator |
2025-08-29 15:09:46.691931 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-08-29 15:09:46.691938 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:00.320) 0:00:43.949 *********
2025-08-29 15:09:46.691949 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:46.691960 | orchestrator |
2025-08-29 15:09:46.691970 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-08-29 15:09:46.691980 | orchestrator | Friday 29 August 2025 15:07:31 +0000 (0:00:00.297) 0:00:44.270 *********
2025-08-29 15:09:46.691991 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:46.692003 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:46.692015 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:46.692025 | orchestrator |
2025-08-29 15:09:46.692037
| orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:09:46.692044 | orchestrator | Friday 29 August 2025 15:07:31 +0000 (0:00:00.297) 0:00:44.567 ********* 2025-08-29 15:09:46.692051 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:09:46.692058 | orchestrator | 2025-08-29 15:09:46.692064 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-08-29 15:09:46.692070 | orchestrator | Friday 29 August 2025 15:07:31 +0000 (0:00:00.540) 0:00:45.107 ********* 2025-08-29 15:09:46.692083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.692102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.692110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.692122 | orchestrator | 2025-08-29 15:09:46.692129 | orchestrator | TASK [service-cert-copy : glance | 
Copying over backend internal TLS certificate] *** 2025-08-29 15:09:46.692135 | orchestrator | Friday 29 August 2025 15:07:35 +0000 (0:00:03.874) 0:00:48.982 ********* 2025-08-29 15:09:46.692152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:09:46.692160 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:09:46.692168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:09:46.692179 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:46.692192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:09:46.692200 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.692207 | orchestrator | 2025-08-29 15:09:46.692213 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-08-29 15:09:46.692220 | orchestrator | Friday 29 August 2025 15:07:41 +0000 (0:00:05.459) 0:00:54.441 ********* 2025-08-29 15:09:46.692238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:09:46.692250 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:46.692262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-08-29 15:09:46.692269 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.692280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-08-29 15:09:46.692287 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:46.692300 | orchestrator |
2025-08-29 15:09:46.692306 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-08-29 15:09:46.692313 | orchestrator | Friday 29 August 2025 15:07:44 +0000 (0:00:03.760) 0:00:58.202 *********
2025-08-29 15:09:46.692319 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:09:46.692326 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:09:46.692333 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:09:46.692358 | orchestrator |
2025-08-29 15:09:46.692365 | orchestrator | TASK
[glance : Copying over config.json files for services] ******************** 2025-08-29 15:09:46.692372 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:04.116) 0:01:02.318 ********* 2025-08-29 15:09:46.692384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.692396 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.692420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.692427 | orchestrator | 2025-08-29 15:09:46.692434 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-08-29 15:09:46.692441 | orchestrator | Friday 29 August 2025 15:07:54 +0000 (0:00:05.006) 0:01:07.324 ********* 2025-08-29 15:09:46.692447 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:46.692454 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:09:46.692460 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:09:46.692467 | orchestrator | 2025-08-29 15:09:46.692473 | 
orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-08-29 15:09:46.692480 | orchestrator | Friday 29 August 2025 15:08:00 +0000 (0:00:06.700) 0:01:14.024 ********* 2025-08-29 15:09:46.692486 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:46.692493 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.692500 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:46.692506 | orchestrator | 2025-08-29 15:09:46.692513 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-08-29 15:09:46.692644 | orchestrator | Friday 29 August 2025 15:08:05 +0000 (0:00:04.696) 0:01:18.721 ********* 2025-08-29 15:09:46.692654 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:46.692661 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.692668 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:46.692674 | orchestrator | 2025-08-29 15:09:46.692681 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-08-29 15:09:46.692687 | orchestrator | Friday 29 August 2025 15:08:10 +0000 (0:00:04.701) 0:01:23.422 ********* 2025-08-29 15:09:46.692694 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:46.692701 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:46.692707 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.692714 | orchestrator | 2025-08-29 15:09:46.692721 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-08-29 15:09:46.692727 | orchestrator | Friday 29 August 2025 15:08:16 +0000 (0:00:05.902) 0:01:29.324 ********* 2025-08-29 15:09:46.692734 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:46.692740 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:46.692747 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.692754 | orchestrator | 2025-08-29 15:09:46.692766 | 
orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-08-29 15:09:46.692773 | orchestrator | Friday 29 August 2025 15:08:20 +0000 (0:00:04.139) 0:01:33.464 ********* 2025-08-29 15:09:46.692779 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:46.692789 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:46.692801 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.692812 | orchestrator | 2025-08-29 15:09:46.692823 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-08-29 15:09:46.692834 | orchestrator | Friday 29 August 2025 15:08:20 +0000 (0:00:00.451) 0:01:33.916 ********* 2025-08-29 15:09:46.692852 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:09:46.692866 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.692876 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:09:46.692885 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:46.692896 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-08-29 15:09:46.692906 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:46.692917 | orchestrator | 2025-08-29 15:09:46.692927 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-08-29 15:09:46.692938 | orchestrator | Friday 29 August 2025 15:08:26 +0000 (0:00:06.077) 0:01:39.993 ********* 2025-08-29 15:09:46.692950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.692976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.692995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-08-29 15:09:46.693008 | orchestrator | 2025-08-29 15:09:46.693018 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-08-29 15:09:46.693028 | orchestrator | Friday 29 August 2025 15:08:32 +0000 (0:00:06.012) 0:01:46.006 ********* 2025-08-29 15:09:46.693040 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:09:46.693051 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:09:46.693063 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:09:46.693074 | orchestrator | 2025-08-29 15:09:46.693086 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-08-29 15:09:46.693098 | orchestrator | Friday 29 August 2025 15:08:33 +0000 (0:00:00.342) 0:01:46.348 ********* 2025-08-29 15:09:46.693105 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:46.693111 | orchestrator | 2025-08-29 15:09:46.693118 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-08-29 15:09:46.693125 | orchestrator | Friday 29 August 2025 
15:08:35 +0000 (0:00:02.051) 0:01:48.399 ********* 2025-08-29 15:09:46.693131 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:46.693138 | orchestrator | 2025-08-29 15:09:46.693144 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-08-29 15:09:46.693151 | orchestrator | Friday 29 August 2025 15:08:37 +0000 (0:00:01.867) 0:01:50.267 ********* 2025-08-29 15:09:46.693157 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:46.693164 | orchestrator | 2025-08-29 15:09:46.693170 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-08-29 15:09:46.693183 | orchestrator | Friday 29 August 2025 15:08:39 +0000 (0:00:02.316) 0:01:52.584 ********* 2025-08-29 15:09:46.693189 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:46.693196 | orchestrator | 2025-08-29 15:09:46.693203 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-08-29 15:09:46.693209 | orchestrator | Friday 29 August 2025 15:09:08 +0000 (0:00:28.706) 0:02:21.290 ********* 2025-08-29 15:09:46.693216 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:46.693222 | orchestrator | 2025-08-29 15:09:46.693234 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 15:09:46.693241 | orchestrator | Friday 29 August 2025 15:09:10 +0000 (0:00:02.087) 0:02:23.378 ********* 2025-08-29 15:09:46.693247 | orchestrator | 2025-08-29 15:09:46.693254 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 15:09:46.693260 | orchestrator | Friday 29 August 2025 15:09:10 +0000 (0:00:00.075) 0:02:23.453 ********* 2025-08-29 15:09:46.693267 | orchestrator | 2025-08-29 15:09:46.693274 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-08-29 15:09:46.693281 | orchestrator | Friday 29 August 2025 
15:09:10 +0000 (0:00:00.067) 0:02:23.521 ********* 2025-08-29 15:09:46.693290 | orchestrator | 2025-08-29 15:09:46.693297 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-08-29 15:09:46.693305 | orchestrator | Friday 29 August 2025 15:09:10 +0000 (0:00:00.067) 0:02:23.589 ********* 2025-08-29 15:09:46.693312 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:09:46.693320 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:09:46.693328 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:09:46.693362 | orchestrator | 2025-08-29 15:09:46.693371 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:09:46.693385 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-08-29 15:09:46.693395 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 15:09:46.693404 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-08-29 15:09:46.693413 | orchestrator | 2025-08-29 15:09:46.693421 | orchestrator | 2025-08-29 15:09:46.693429 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:09:46.693437 | orchestrator | Friday 29 August 2025 15:09:44 +0000 (0:00:34.386) 0:02:57.976 ********* 2025-08-29 15:09:46.693446 | orchestrator | =============================================================================== 2025-08-29 15:09:46.693454 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.39s 2025-08-29 15:09:46.693462 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.71s 2025-08-29 15:09:46.693470 | orchestrator | glance : Ensuring config directories exist ------------------------------ 7.55s 2025-08-29 15:09:46.693478 | orchestrator | glance : 
Copying over glance-api.conf ----------------------------------- 6.70s 2025-08-29 15:09:46.693486 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.35s 2025-08-29 15:09:46.693495 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.08s 2025-08-29 15:09:46.693503 | orchestrator | glance : Check glance containers ---------------------------------------- 6.01s 2025-08-29 15:09:46.693511 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.90s 2025-08-29 15:09:46.693519 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.46s 2025-08-29 15:09:46.693528 | orchestrator | glance : Copying over config.json files for services -------------------- 5.01s 2025-08-29 15:09:46.693536 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.70s 2025-08-29 15:09:46.693552 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.70s 2025-08-29 15:09:46.693560 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.63s 2025-08-29 15:09:46.693569 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.61s 2025-08-29 15:09:46.693578 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.40s 2025-08-29 15:09:46.693586 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.14s 2025-08-29 15:09:46.693594 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.12s 2025-08-29 15:09:46.693602 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.87s 2025-08-29 15:09:46.693611 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.79s 2025-08-29 15:09:46.693619 | orchestrator | service-cert-copy : 
glance | Copying over backend internal TLS key ------ 3.76s 2025-08-29 15:09:46.693627 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:09:46.693636 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:09:46.693839 | orchestrator | 2025-08-29 15:09:46 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:09:46.693854 | orchestrator | 2025-08-29 15:09:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:49.733199 | orchestrator | 2025-08-29 15:09:49 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:09:49.735288 | orchestrator | 2025-08-29 15:09:49 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:09:49.736992 | orchestrator | 2025-08-29 15:09:49 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:09:49.738427 | orchestrator | 2025-08-29 15:09:49 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:09:49.738649 | orchestrator | 2025-08-29 15:09:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:52.773824 | orchestrator | 2025-08-29 15:09:52 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:09:52.775894 | orchestrator | 2025-08-29 15:09:52 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:09:52.778038 | orchestrator | 2025-08-29 15:09:52 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:09:52.780233 | orchestrator | 2025-08-29 15:09:52 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:09:52.781184 | orchestrator | 2025-08-29 15:09:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:55.829018 | orchestrator | 2025-08-29 15:09:55 | INFO  | Task 
f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:09:55.831040 | orchestrator | 2025-08-29 15:09:55 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:09:55.832719 | orchestrator | 2025-08-29 15:09:55 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:09:55.833615 | orchestrator | 2025-08-29 15:09:55 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:09:55.833827 | orchestrator | 2025-08-29 15:09:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:09:58.866962 | orchestrator | 2025-08-29 15:09:58 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:09:58.871526 | orchestrator | 2025-08-29 15:09:58 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:09:58.872296 | orchestrator | 2025-08-29 15:09:58 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:09:58.874111 | orchestrator | 2025-08-29 15:09:58 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:09:58.874361 | orchestrator | 2025-08-29 15:09:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:01.910280 | orchestrator | 2025-08-29 15:10:01 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:10:01.911219 | orchestrator | 2025-08-29 15:10:01 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:10:01.911878 | orchestrator | 2025-08-29 15:10:01 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:10:01.913027 | orchestrator | 2025-08-29 15:10:01 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:10:01.913072 | orchestrator | 2025-08-29 15:10:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:04.963007 | orchestrator | 2025-08-29 15:10:04 | INFO  | Task 
f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:10:04.964258 | orchestrator | 2025-08-29 15:10:04 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:10:04.967103 | orchestrator | 2025-08-29 15:10:04 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:10:04.969720 | orchestrator | 2025-08-29 15:10:04 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state STARTED 2025-08-29 15:10:04.969771 | orchestrator | 2025-08-29 15:10:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:10:08.021570 | orchestrator | 2025-08-29 15:10:08 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:10:08.022681 | orchestrator | 2025-08-29 15:10:08 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:10:08.024537 | orchestrator | 2025-08-29 15:10:08 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED 2025-08-29 15:10:08.026205 | orchestrator | 2025-08-29 15:10:08 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:10:08.031040 | orchestrator | 2025-08-29 15:10:08 | INFO  | Task 12e74d17-2d33-433f-9507-630bc0152ded is in state SUCCESS 2025-08-29 15:10:08.033925 | orchestrator | 2025-08-29 15:10:08.034122 | orchestrator | 2025-08-29 15:10:08.034160 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:10:08.034188 | orchestrator | 2025-08-29 15:10:08.034208 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:10:08.034230 | orchestrator | Friday 29 August 2025 15:06:39 +0000 (0:00:00.354) 0:00:00.354 ********* 2025-08-29 15:10:08.034258 | orchestrator | ok: [testbed-manager] 2025-08-29 15:10:08.034285 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:10:08.034312 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:10:08.035064 | orchestrator | ok: 
[testbed-node-2] 2025-08-29 15:10:08.035088 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:10:08.035099 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:10:08.035110 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:10:08.035121 | orchestrator | 2025-08-29 15:10:08.035134 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:10:08.035146 | orchestrator | Friday 29 August 2025 15:06:40 +0000 (0:00:00.940) 0:00:01.294 ********* 2025-08-29 15:10:08.035158 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-08-29 15:10:08.035170 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-08-29 15:10:08.035181 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-08-29 15:10:08.035192 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-08-29 15:10:08.035203 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-08-29 15:10:08.035393 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-08-29 15:10:08.035409 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-08-29 15:10:08.035427 | orchestrator | 2025-08-29 15:10:08.035445 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-08-29 15:10:08.035472 | orchestrator | 2025-08-29 15:10:08.035491 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 15:10:08.035508 | orchestrator | Friday 29 August 2025 15:06:41 +0000 (0:00:00.767) 0:00:02.062 ********* 2025-08-29 15:10:08.035546 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:10:08.035566 | orchestrator | 2025-08-29 15:10:08.035585 | orchestrator | TASK [prometheus : Ensuring config directories exist] 
************************** 2025-08-29 15:10:08.035603 | orchestrator | Friday 29 August 2025 15:06:42 +0000 (0:00:01.408) 0:00:03.471 ********* 2025-08-29 15:10:08.035627 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:10:08.035654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.035675 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.035695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.035744 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.035767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.035805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.035837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.035860 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036064 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036205 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:10:08.036231 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036372 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.036451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036503 | orchestrator | 2025-08-29 15:10:08.036514 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-08-29 15:10:08.036526 | orchestrator | Friday 29 August 2025 15:06:46 +0000 (0:00:03.638) 0:00:07.109 ********* 2025-08-29 15:10:08.036537 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:10:08.036549 | orchestrator | 2025-08-29 15:10:08.036560 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-08-29 15:10:08.036570 | orchestrator | Friday 29 August 2025 15:06:48 +0000 (0:00:01.707) 0:00:08.817 ********* 2025-08-29 15:10:08.036587 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:10:08.036600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036677 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.036724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.036781 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037014 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.037082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.037094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.037106 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-08-29 15:10:08.037137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037149 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037200 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.037212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-08-29 15:10:08.037230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.037248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.037260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-08-29 15:10:08.037271 | orchestrator | 2025-08-29 15:10:08.037283 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-08-29 15:10:08.037295 | orchestrator | Friday 29 August 2025 15:06:53 +0000 (0:00:05.745) 0:00:14.563 ********* 2025-08-29 15:10:08.037312 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:10:08.037380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-08-29 15:10:08.037394 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:10:08.037433 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.037457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037535 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:10:08.037547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.037558 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:08.037569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.037633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:08.037645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.037698 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:08.037716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.037763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037788 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:08.037804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.037815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037844 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:08.037910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.037921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.037951 | orchestrator | skipping: 
[testbed-node-5] 2025-08-29 15:10:08.037961 | orchestrator | 2025-08-29 15:10:08.037971 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-08-29 15:10:08.037981 | orchestrator | Friday 29 August 2025 15:06:55 +0000 (0:00:01.347) 0:00:15.910 ********* 2025-08-29 15:10:08.037991 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-08-29 15:10:08.038006 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.038079 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038091 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-08-29 15:10:08.038102 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038112 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:10:08.038129 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.038140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.038207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038255 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:08.038265 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:08.038279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.038296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-08-29 15:10:08.038358 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:08.038375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.038386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038411 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:08.038426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.038437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038457 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:08.038467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-08-29 15:10:08.038477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.038956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-08-29 15:10:08.039039 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:08.039050 | orchestrator | 2025-08-29 15:10:08.039058 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-08-29 15:10:08.039065 | orchestrator | Friday 29 August 2025 15:06:57 +0000 (0:00:01.883) 0:00:17.794 ********* 2025-08-29 15:10:08.039074 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-08-29 15:10:08.039113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.039122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.039129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.039135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.039142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-08-29 15:10:08.039162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:10:08.039170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-08-29 15:10:08.039182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039224 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039236 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039279 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}})
2025-08-29 15:10:08.039288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039359 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-08-29 15:10:08.039368 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-08-29 15:10:08.039406 | orchestrator |
2025-08-29 15:10:08.039424 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-08-29 15:10:08.039434 | orchestrator | Friday 29 August 2025 15:07:03 +0000 (0:00:05.847) 0:00:23.641 *********
2025-08-29 15:10:08.039445 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 15:10:08.039455 | orchestrator |
2025-08-29 15:10:08.039466 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-08-29 15:10:08.039484 | orchestrator | Friday 29 August 2025 15:07:04 +0000 (0:00:01.060) 0:00:24.702 *********
2025-08-29 15:10:08.039496 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095306, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9361148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039509 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095306, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9361148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039526 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095306, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9361148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039539 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095332, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9419978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039551 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095306, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9361148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039561 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095306, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9361148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039587 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095332, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9419978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039600 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095332, 'dev': 96, 'nlink': 1, 
'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9419978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039612 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095289, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9355302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039627 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095332, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9419978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039638 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095306, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9361148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039650 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095289, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9355302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039662 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095332, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9419978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039688 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095289, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9355302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039696 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1095306, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9361148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039703 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095289, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9355302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039715 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095332, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9419978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039723 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095289, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9355302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039730 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095319, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9389393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039737 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095319, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9389393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039758 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095319, 'dev': 96, 'nlink': 1, 'atime': 
1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9389393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039766 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095289, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9355302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039772 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095319, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9389393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039783 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095319, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9389393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039790 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095283, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9318645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039798 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095283, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9318645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039808 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095319, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9389393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039815 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095283, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9318645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039828 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095283, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9318645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095309, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9366376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039847 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095309, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9366376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039854 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095316, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039861 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095283, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9318645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.039872 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095316, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 
1756477369.937821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039879 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095283, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9318645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039891 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095309, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9366376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039898 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095312, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9370136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039909 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095312, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9370136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039916 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1095332, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9419978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:10:08.039924 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095309, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9366376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039935 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095309, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9366376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039942 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095316, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039955 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095309, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9366376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039963 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095304, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9356854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039974 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095304, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9356854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039981 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095316, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.039988 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095316, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937821, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040000 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095312, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9370136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040012 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095312, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9370136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040037 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095304, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9356854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040051 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095312, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9370136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040067 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095330, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.941465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040079 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095316, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040100 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095304, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9356854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040112 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095330, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.941465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040123 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1095289, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9355302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:10:08.040143 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095330, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.941465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040159 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095304, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9356854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040177 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095270, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9300745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040191 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095270, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 
1756477369.9300745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040210 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095330, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.941465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040218 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095270, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9300745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040225 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095312, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9370136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040238 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095353, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040246 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095330, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.941465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040257 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095353, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040265 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095323, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9409459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040277 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095270, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9300745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040286 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095270, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9300745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040293 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095304, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9356854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040306 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1095319, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9389393, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-08-29 15:10:08.040313 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095323, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9409459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040412 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095287, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 
'ctime': 1756477369.932286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040434 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095353, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040442 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095287, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.932286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040449 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095353, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040456 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095274, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9314466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040471 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095330, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.941465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040478 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095353, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040485 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095323, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9409459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040501 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095314, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9375327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040508 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095323, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9409459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040515 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095287, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.932286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040521 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095313, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040625 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095274, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9314466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-08-29 15:10:08.040645 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095323, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 
1756477369.9409459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040658 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095287, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.932286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040684 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095350, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040697 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:10:08.040711 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095270, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9300745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040724 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1095283, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9318645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040737 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095287, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.932286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040756 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095274, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9314466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040769 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095274, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9314466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040780 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095314, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9375327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040806 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095274, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9314466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040817 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095314, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9375327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040830 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095353, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040837 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095314, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9375327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040848 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095314, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9375327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040856 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095313, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040867 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1095309, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9366376, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040878 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095313, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040885 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095323, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9409459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040892 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095350, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040898 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:10:08.040905 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095313, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040917 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095313, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040924 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095350, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040935 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:10:08.040942 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095287, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.932286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040953 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095350, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040960 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:10:08.040966 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095350, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040973 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:08.040980 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1095316, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040987 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095274, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9314466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.040997 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095314, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9375327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041009 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1095312, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9370136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041016 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095313, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041026 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095350, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041033 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:10:08.041040 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1095304, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9356854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041047 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095330, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.941465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041054 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095270, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9300745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041064 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1095353, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041076 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1095323, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9409459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041083 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1095287, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.932286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041097 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1095274, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9314466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041105 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1095314, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9375327, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041112 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1095313, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.937292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041118 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1095350, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.94394, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-08-29 15:10:08.041125 | orchestrator |
2025-08-29 15:10:08.041132 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-08-29 15:10:08.041145 | orchestrator | Friday 29 August 2025 15:07:35 +0000 (0:00:31.549) 0:00:56.252 *********
2025-08-29 15:10:08.041152 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 15:10:08.041159 | orchestrator |
2025-08-29 15:10:08.041167 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-08-29 15:10:08.041178 | orchestrator | Friday 29 August 2025 15:07:36 +0000 (0:00:00.794) 0:00:57.047 *********
2025-08-29 15:10:08.041193 | orchestrator | [WARNING]: Skipped
2025-08-29 15:10:08.041211 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041224 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-08-29 15:10:08.041234 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041244 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-08-29 15:10:08.041255 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 15:10:08.041265 | orchestrator | [WARNING]: Skipped
2025-08-29 15:10:08.041275 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041285 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-08-29 15:10:08.041297 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041307 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-08-29 15:10:08.041318 | orchestrator | [WARNING]: Skipped
2025-08-29 15:10:08.041356 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041367 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-08-29 15:10:08.041379 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041390 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-08-29 15:10:08.041402 | orchestrator | [WARNING]: Skipped
2025-08-29 15:10:08.041413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041425 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-08-29 15:10:08.041436 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041456 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-08-29 15:10:08.041469 | orchestrator | [WARNING]: Skipped
2025-08-29 15:10:08.041480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041499 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-08-29 15:10:08.041510 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041521 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-08-29 15:10:08.041531 | orchestrator | [WARNING]: Skipped
2025-08-29 15:10:08.041541 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041547 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-08-29 15:10:08.041553 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041560 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-08-29 15:10:08.041566 | orchestrator | [WARNING]: Skipped
2025-08-29 15:10:08.041572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041578 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-08-29 15:10:08.041584 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-08-29 15:10:08.041591 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-08-29 15:10:08.041597 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-08-29 15:10:08.041603 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:10:08.041609 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-08-29 15:10:08.041616 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-08-29 15:10:08.041631 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-08-29 15:10:08.041637 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-08-29 15:10:08.041644 | orchestrator |
2025-08-29 15:10:08.041650 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-08-29 15:10:08.041656 | orchestrator | Friday 29 August 2025 15:07:39 +0000 (0:00:02.965) 0:01:00.012 *********
2025-08-29 15:10:08.041662 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:10:08.041669 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:08.041679 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:10:08.041695 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:10:08.041706 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:10:08.041716 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:10:08.041726 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:10:08.041738 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:10:08.041748 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:10:08.041758 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:10:08.041768 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:10:08.041779 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:10:08.041791 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-08-29 15:10:08.041803 | orchestrator |
2025-08-29 15:10:08.041814 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-08-29 15:10:08.041826 | orchestrator | Friday 29 August 2025 15:07:58 +0000 (0:00:19.520) 0:01:19.532 *********
2025-08-29 15:10:08.041837 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:10:08.041847 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:10:08.041859 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:10:08.041895 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:08.041903 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:10:08.041918 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:10:08.041925 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:10:08.041931 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:10:08.041938 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:10:08.041944 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:10:08.041950 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:10:08.041957 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:10:08.041963 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-08-29 15:10:08.041969 | orchestrator |
2025-08-29 15:10:08.041976 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-08-29 15:10:08.041982 | orchestrator | Friday 29 August 2025 15:08:03 +0000 (0:00:04.152) 0:01:23.685 *********
2025-08-29 15:10:08.041989 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:10:08.041998 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:10:08.042004 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:10:08.042010 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:10:08.042059 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:08.042068 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:10:08.042080 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:10:08.042087 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:10:08.042096 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:10:08.042107 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:10:08.042123 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:10:08.042135 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-08-29 15:10:08.042146 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:10:08.042157 | orchestrator |
2025-08-29 15:10:08.042166 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-08-29 15:10:08.042178 | orchestrator | Friday 29 August 2025 15:08:05 +0000 (0:00:02.733) 0:01:26.419 *********
2025-08-29 15:10:08.042189 | orchestrator | ok: [testbed-manager -> localhost]
2025-08-29 15:10:08.042201 | orchestrator |
2025-08-29 15:10:08.042212 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-08-29 15:10:08.042223 | orchestrator | Friday 29 August 2025 15:08:06 +0000 (0:00:01.171) 0:01:27.590 *********
2025-08-29 15:10:08.042234 | orchestrator | skipping: [testbed-manager]
2025-08-29 15:10:08.042246 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:08.042258 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:10:08.042269 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:10:08.042279 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:10:08.042285 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:10:08.042291 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:10:08.042298 | orchestrator |
2025-08-29 15:10:08.042304 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-08-29 15:10:08.042310 | orchestrator | Friday 29 August 2025 15:08:08 +0000 (0:00:01.106) 0:01:28.697 *********
2025-08-29 15:10:08.042316 | orchestrator | skipping: [testbed-manager]
2025-08-29 15:10:08.042339 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:10:08.042347 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:10:08.042353 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:10:08.042360 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:10:08.042366 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:10:08.042372 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:10:08.042378 | orchestrator |
2025-08-29 15:10:08.042384 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-08-29 15:10:08.042390 | orchestrator | Friday 29 August 2025 15:08:10 +0000 (0:00:02.572) 0:01:31.269 *********
2025-08-29 15:10:08.042397 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 15:10:08.042403 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:10:08.042409 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 15:10:08.042415 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-08-29 15:10:08.042422 | orchestrator | skipping: [testbed-manager]
2025-08-29
15:10:08.042428 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:08.042434 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:10:08.042440 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:08.042446 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:10:08.042452 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:08.042468 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:10:08.042486 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:08.042492 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-08-29 15:10:08.042498 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:08.042504 | orchestrator | 2025-08-29 15:10:08.042510 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-08-29 15:10:08.042521 | orchestrator | Friday 29 August 2025 15:08:13 +0000 (0:00:02.480) 0:01:33.750 ********* 2025-08-29 15:10:08.042531 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:10:08.042543 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:10:08.042553 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:08.042563 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:08.042573 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:10:08.042584 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:10:08.042593 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 
15:10:08.042603 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:10:08.042612 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-08-29 15:10:08.042621 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:10:08.042632 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:10:08.042643 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-08-29 15:10:08.042654 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:10:08.042665 | orchestrator | 2025-08-29 15:10:08.042683 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-08-29 15:10:08.042693 | orchestrator | Friday 29 August 2025 15:08:15 +0000 (0:00:02.513) 0:01:36.263 ********* 2025-08-29 15:10:08.042702 | orchestrator | [WARNING]: Skipped 2025-08-29 15:10:08.042711 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-08-29 15:10:08.042719 | orchestrator | due to this access issue: 2025-08-29 15:10:08.042730 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-08-29 15:10:08.042740 | orchestrator | not a directory 2025-08-29 15:10:08.042750 | orchestrator | ok: [testbed-manager -> localhost] 2025-08-29 15:10:08.042759 | orchestrator | 2025-08-29 15:10:08.042768 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-08-29 15:10:08.042778 | orchestrator | Friday 29 August 2025 15:08:17 +0000 (0:00:01.566) 0:01:37.830 ********* 2025-08-29 15:10:08.042787 | orchestrator | skipping: [testbed-manager] 2025-08-29 15:10:08.042796 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:10:08.042806 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:10:08.042817 | 
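The [WARNING] emitted by the "Find extra prometheus server config files" task is benign: the testbed configuration ships no extras/ directory, so the find lookup reports nothing and the follow-up extra-config tasks skip. A minimal Python sketch of the same tolerant lookup (the helper name is illustrative, not part of the role):

```python
from pathlib import Path

def find_extra_configs(base: str) -> list:
    """Return extra config files under base, or [] when base is absent.

    Mirrors the behaviour in the log: a missing or non-directory extras/
    path is skipped with a warning instead of failing the task.
    """
    root = Path(base)
    if not root.is_dir():
        print(f"[WARNING]: Skipped '{base}' path: not a directory")
        return []
    return sorted(p for p in root.rglob("*") if p.is_file())

# With no extras directory present, the lookup yields nothing, so the
# "Create subdirectories" / "Template extra ..." tasks have no items.
find_extra_configs("/opt/configuration/environments/kolla/files/overlays/prometheus/extras/")
```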
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Template extra prometheus server config files] **************
Friday 29 August 2025 15:08:18 +0000 (0:00:01.162) 0:01:38.993 *********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Check prometheus containers] ********************************
Friday 29 August 2025 15:08:19 +0000 (0:00:00.826) 0:01:39.819 *********
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})

TASK [prometheus : Creating prometheus database user and setting permissions] ***
Friday 29 August 2025 15:08:23 +0000 (0:00:04.654) 0:01:44.473 *********
skipping: [testbed-manager] => (item=testbed-node-0)
skipping: [testbed-manager]

TASK [prometheus : Flush handlers] *********************************************
Friday 29 August 2025 15:08:26 +0000 (0:00:02.647) 0:01:47.121 *********

TASK [prometheus : Flush handlers] *********************************************
Friday 29 August 2025 15:08:26 +0000 (0:00:00.107) 0:01:47.229 *********

TASK [prometheus : Flush handlers] *********************************************
Friday 29 August 2025 15:08:26 +0000 (0:00:00.074) 0:01:47.303 *********

TASK [prometheus : Flush handlers] *********************************************
Friday 29 August 2025 15:08:26 +0000 (0:00:00.067) 0:01:47.371 *********

TASK [prometheus : Flush handlers] *********************************************
Friday 29 August 2025 15:08:26 +0000 (0:00:00.199) 0:01:47.570 *********
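Each item in the "Check prometheus containers" task above prints as (item={'key': ..., 'value': ...}) because kolla-ansible feeds its per-service mapping through Ansible's dict2items filter before looping. A small Python sketch of that transformation (the two sample services are abbreviated from the dicts dumped in the log; real entries also carry group, volumes, and haproxy settings):

```python
# Abbreviated kolla-style service map, as dumped in the log above.
services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
    },
    "prometheus-cadvisor": {
        "container_name": "prometheus_cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-cadvisor:2024.2",
    },
}

def dict2items(d):
    """Python equivalent of Ansible's dict2items filter."""
    return [{"key": k, "value": v} for k, v in d.items()]

# The check task loops over these key/value pairs, one per container.
for item in dict2items(services):
    if item["value"]["enabled"]:
        print(f"checking container {item['value']['container_name']}")
```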

TASK [prometheus : Flush handlers] *********************************************
Friday 29 August 2025 15:08:27 +0000 (0:00:00.079) 0:01:47.650 *********

TASK [prometheus : Flush handlers] *********************************************
Friday 29 August 2025 15:08:27 +0000 (0:00:00.081) 0:01:47.731 *********

RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
Friday 29 August 2025 15:08:27 +0000 (0:00:00.100) 0:01:47.832 *********
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
Friday 29 August 2025 15:08:45 +0000 (0:00:18.307) 0:02:06.139 *********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]

RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
Friday 29 August 2025 15:09:02 +0000 (0:00:16.975) 0:02:23.115 *********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
Friday 29 August 2025 15:09:12 +0000 (0:00:10.318) 0:02:33.433 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
Friday 29 August 2025 15:09:24 +0000 (0:00:11.567) 0:02:45.000 *********
changed: [testbed-node-1]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-2]

RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
Friday 29 August 2025 15:09:38 +0000 (0:00:14.057) 0:02:59.058 *********
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
Friday 29 August 2025 15:09:45 +0000 (0:00:06.877) 0:03:05.936 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
Friday 29 August 2025 15:09:53 +0000 (0:00:07.706) 0:03:13.642 *********
changed: [testbed-manager]

RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
Friday 29 August 2025 15:09:59 +0000 (0:00:06.558) 0:03:20.201 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
testbed-node-0  : ok=15  changed=10  unreachable=0 failed=0 skipped=11 rescued=0 ignored=0
testbed-node-1  : ok=15  changed=10  unreachable=0 failed=0 skipped=11 rescued=0 ignored=0
testbed-node-2  : ok=15  changed=10  unreachable=0 failed=0 skipped=11 rescued=0 ignored=0
testbed-node-3  : ok=12  changed=7   unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
testbed-node-4  : ok=12  changed=7   unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
testbed-node-5  : ok=12  changed=7   unreachable=0 failed=0 skipped=12 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Friday 29 August 2025 15:10:05 +0000 (0:00:06.367) 0:03:26.568 *********
===============================================================================
prometheus : Copying over custom prometheus alert rules files ---------- 31.55s
prometheus : Copying over prometheus config file ----------------------- 19.52s
prometheus : Restart prometheus-server container ----------------------- 18.31s
prometheus : Restart prometheus-node-exporter container ---------------- 16.98s
prometheus : Restart prometheus-cadvisor container --------------------- 14.06s
prometheus : Restart prometheus-memcached-exporter container ----------- 11.57s
prometheus : Restart prometheus-mysqld-exporter container -------------- 10.32s
prometheus : Restart prometheus-elasticsearch-exporter container -------- 7.71s
prometheus : Restart prometheus-alertmanager container ------------------ 6.88s
prometheus : Restart prometheus-blackbox-exporter container ------------- 6.56s
prometheus : Restart prometheus-libvirt-exporter container -------------- 6.37s
prometheus : Copying over config.json files ----------------------------- 5.85s
service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.75s
prometheus : Check prometheus containers -------------------------------- 4.65s
prometheus : Copying over prometheus web config file -------------------- 4.15s
prometheus : Ensuring config directories exist -------------------------- 3.64s
prometheus : Find prometheus host config overrides ---------------------- 2.97s
prometheus : Copying over prometheus alertmanager config file ----------- 2.73s
prometheus : Creating prometheus database user and setting permissions --- 2.65s
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.55s
2025-08-29 15:10:08 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:10:11 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:10:11 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED
2025-08-29 15:10:11 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:10:11 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:10:11 | INFO  | Wait 1 second(s) until the next check [... 18 further identical polling cycles elided: tasks f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4, 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c, 4db36d32-af5b-4fe1-9154-53089ba7cf07 and 21d52088-e07d-41c1-a19c-b5b32aa346cb all remain in state STARTED, re-checked every ~3 seconds from 15:10:14 through 15:11:05 ...] 2025-08-29 15:11:08.868746 | orchestrator | 2025-08-29 15:11:08 | INFO  | Task
f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:11:08.868851 | orchestrator | 2025-08-29 15:11:08 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:11:08.871851 | orchestrator | 2025-08-29 15:11:08 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED 2025-08-29 15:11:08.871932 | orchestrator | 2025-08-29 15:11:08 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:11:08.871949 | orchestrator | 2025-08-29 15:11:08 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:11.907398 | orchestrator | 2025-08-29 15:11:11 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:11:11.907867 | orchestrator | 2025-08-29 15:11:11 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state STARTED 2025-08-29 15:11:11.908597 | orchestrator | 2025-08-29 15:11:11 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED 2025-08-29 15:11:11.909831 | orchestrator | 2025-08-29 15:11:11 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:11:11.909863 | orchestrator | 2025-08-29 15:11:11 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:11:14.994177 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:11:14.994455 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:11:14.996521 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task 4e94ca91-41b0-4bd2-b64d-04481c0dfd0c is in state SUCCESS 2025-08-29 15:11:14.999128 | orchestrator | 2025-08-29 15:11:14.999208 | orchestrator | 2025-08-29 15:11:14.999229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:11:14.999248 | orchestrator | 2025-08-29 15:11:14.999266 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-08-29 15:11:14.999921 | orchestrator | Friday 29 August 2025 15:07:11 +0000 (0:00:00.325) 0:00:00.325 ********* 2025-08-29 15:11:14.999958 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:11:14.999979 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:11:14.999998 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:11:15.000015 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:11:15.000034 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:11:15.000053 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:11:15.000071 | orchestrator | 2025-08-29 15:11:15.000089 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:11:15.000116 | orchestrator | Friday 29 August 2025 15:07:12 +0000 (0:00:01.384) 0:00:01.710 ********* 2025-08-29 15:11:15.000681 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-08-29 15:11:15.000715 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-08-29 15:11:15.000726 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-08-29 15:11:15.000737 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-08-29 15:11:15.000748 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-08-29 15:11:15.000759 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-08-29 15:11:15.000770 | orchestrator | 2025-08-29 15:11:15.000781 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-08-29 15:11:15.000792 | orchestrator | 2025-08-29 15:11:15.000803 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-08-29 15:11:15.000814 | orchestrator | Friday 29 August 2025 15:07:13 +0000 (0:00:01.366) 0:00:03.076 ********* 2025-08-29 15:11:15.000826 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:11:15.000867 | orchestrator | 2025-08-29 15:11:15.000878 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-08-29 15:11:15.000889 | orchestrator | Friday 29 August 2025 15:07:18 +0000 (0:00:04.359) 0:00:07.438 ********* 2025-08-29 15:11:15.000901 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-08-29 15:11:15.000912 | orchestrator | 2025-08-29 15:11:15.000923 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-08-29 15:11:15.000934 | orchestrator | Friday 29 August 2025 15:07:22 +0000 (0:00:04.216) 0:00:11.654 ********* 2025-08-29 15:11:15.000946 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-08-29 15:11:15.000957 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-08-29 15:11:15.000968 | orchestrator | 2025-08-29 15:11:15.000979 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-08-29 15:11:15.000990 | orchestrator | Friday 29 August 2025 15:07:28 +0000 (0:00:06.104) 0:00:17.759 ********* 2025-08-29 15:11:15.001002 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:11:15.001013 | orchestrator | 2025-08-29 15:11:15.001024 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-08-29 15:11:15.001035 | orchestrator | Friday 29 August 2025 15:07:31 +0000 (0:00:02.906) 0:00:20.665 ********* 2025-08-29 15:11:15.001046 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:11:15.001057 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-08-29 15:11:15.001068 | orchestrator | 2025-08-29 15:11:15.001079 | orchestrator | TASK 
[service-ks-register : cinder | Creating roles] *************************** 2025-08-29 15:11:15.001090 | orchestrator | Friday 29 August 2025 15:07:34 +0000 (0:00:03.320) 0:00:23.986 ********* 2025-08-29 15:11:15.001101 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:11:15.001112 | orchestrator | 2025-08-29 15:11:15.001123 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-08-29 15:11:15.001134 | orchestrator | Friday 29 August 2025 15:07:38 +0000 (0:00:03.593) 0:00:27.580 ********* 2025-08-29 15:11:15.001148 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-08-29 15:11:15.001166 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-08-29 15:11:15.001183 | orchestrator | 2025-08-29 15:11:15.001201 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-08-29 15:11:15.001219 | orchestrator | Friday 29 August 2025 15:07:46 +0000 (0:00:08.145) 0:00:35.725 ********* 2025-08-29 15:11:15.001242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 
15:11:15.001377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:15.001410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.001426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.001439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:15.001453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.001508 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.001531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.001546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.001559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.001580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.001602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.001622 | orchestrator |
2025-08-29 15:11:15.001686 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:11:15.001719 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:02.858) 0:00:38.584 *********
2025-08-29 15:11:15.001747 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.001767 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.001785 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:15.001804 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:15.001816 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:15.001826 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:15.001837 | orchestrator |
2025-08-29 15:11:15.001848 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:11:15.001859 | orchestrator | Friday 29 August 2025 15:07:49 +0000 (0:00:00.532) 0:00:39.117 *********
2025-08-29 15:11:15.001870 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.001880 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.001891 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:15.001902 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:11:15.001913 | orchestrator |
2025-08-29 15:11:15.001923 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-08-29 15:11:15.001934 | orchestrator | Friday 29 August 2025 15:07:51 +0000 (0:00:01.167) 0:00:40.285 *********
2025-08-29 15:11:15.001945 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-08-29 15:11:15.001956 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-08-29 15:11:15.001967 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-08-29 15:11:15.001977 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-08-29 15:11:15.001988 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-08-29 15:11:15.001999 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-08-29 15:11:15.002009 | orchestrator |
2025-08-29 15:11:15.002086 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-08-29 15:11:15.002101 | orchestrator | Friday 29 August 2025 15:07:53 +0000 (0:00:02.197) 0:00:42.482 *********
2025-08-29 15:11:15.002115 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002129 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002142 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002212 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002227 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002239 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002250 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002264 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002405 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002424 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002438 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002450 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-08-29 15:11:15.002468 | orchestrator |
2025-08-29 15:11:15.002480 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-08-29 15:11:15.002491 | orchestrator | Friday 29 August 2025 15:07:57 +0000 (0:00:03.835) 0:00:46.318 *********
2025-08-29 15:11:15.002502 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:11:15.002514 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:11:15.002525 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-08-29 15:11:15.002535 | orchestrator |
2025-08-29 15:11:15.002546 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-08-29 15:11:15.002557 | orchestrator | Friday 29 August 2025 15:07:59 +0000 (0:00:02.400) 0:00:48.719 *********
2025-08-29 15:11:15.002568 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-08-29 15:11:15.002578 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-08-29 15:11:15.002589 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-08-29 15:11:15.002600 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:11:15.002610 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:11:15.002651 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-08-29 15:11:15.002664 | orchestrator |
2025-08-29 15:11:15.002675 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-08-29 15:11:15.002699 | orchestrator | Friday 29 August 2025 15:08:03 +0000 (0:00:03.697) 0:00:52.416 *********
2025-08-29 15:11:15.002710 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-08-29 15:11:15.002721 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-08-29 15:11:15.002735 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-08-29 15:11:15.002754 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-08-29 15:11:15.002773 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-08-29 15:11:15.002790 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-08-29 15:11:15.002808 | orchestrator |
2025-08-29 15:11:15.002824 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-08-29 15:11:15.002841 | orchestrator | Friday 29 August 2025 15:08:04 +0000 (0:00:01.435) 0:00:53.852 *********
2025-08-29 15:11:15.002857 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.002876 | orchestrator |
2025-08-29 15:11:15.002893 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-08-29 15:11:15.002914 | orchestrator | Friday 29 August 2025 15:08:04 +0000 (0:00:00.283) 0:00:54.136 *********
2025-08-29 15:11:15.002931 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.002942 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.002951 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:15.002961 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:15.002971 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:15.002980 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:15.002990 | orchestrator |
2025-08-29 15:11:15.002999 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:11:15.003009 | orchestrator | Friday 29 August 2025 15:08:05 +0000 (0:00:00.865) 0:00:55.001 *********
2025-08-29 15:11:15.003020 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:11:15.003031 | orchestrator |
2025-08-29 15:11:15.003040 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-08-29 15:11:15.003050 | orchestrator | Friday 29 August 2025 15:08:07 +0000 (0:00:01.480) 0:00:56.481 *********
2025-08-29 15:11:15.003060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003328 | orchestrator |
2025-08-29 15:11:15.003337 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-08-29 15:11:15.003351 | orchestrator | Friday 29 August 2025 15:08:10 +0000 (0:00:03.535) 0:01:00.017 *********
2025-08-29 15:11:15.003368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003407 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.003431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003493 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.003505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003515 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:15.003525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003562 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:15.003572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003600 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:15.003610 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003631 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:15.003641 | orchestrator |
2025-08-29 15:11:15.003658 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-08-29 15:11:15.003675 | orchestrator | Friday 29 August 2025 15:08:13 +0000 (0:00:02.264) 0:01:02.282 *********
2025-08-29 15:11:15.003707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003764 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.003782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.003827 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.003849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.003886 | orchestrator |
skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:11:15.003903 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:11:15.003919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:11:15.003946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:11:15.003962 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:15.003978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:11:15.003995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:11:15.004012 | 
orchestrator | skipping: [testbed-node-5] 2025-08-29 15:11:15.004047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-08-29 15:11:15.004083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-08-29 15:11:15.004100 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:11:15.004115 | orchestrator | 2025-08-29 15:11:15.004139 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-08-29 15:11:15.004159 | orchestrator | Friday 29 August 2025 15:08:15 +0000 (0:00:02.881) 
0:01:05.163 ********* 2025-08-29 15:11:15.004175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:15.004192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:15.004208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:15.004239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004469 | orchestrator | 2025-08-29 15:11:15.004485 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-08-29 15:11:15.004500 | orchestrator | Friday 29 August 2025 15:08:19 +0000 (0:00:03.702) 0:01:08.867 ********* 2025-08-29 15:11:15.004515 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:11:15.004531 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 15:11:15.004547 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:11:15.004562 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:11:15.004577 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:11:15.004594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:11:15.004609 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-08-29 15:11:15.004624 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-08-29 15:11:15.004640 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:11:15.004655 | orchestrator | 2025-08-29 15:11:15.004671 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-08-29 15:11:15.004687 | orchestrator | Friday 29 August 2025 15:08:22 +0000 (0:00:02.883) 0:01:11.751 ********* 2025-08-29 15:11:15.004703 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004758 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:15.004782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:15.004799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-08-29 15:11:15.004816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-08-29 15:11:15.004983 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.004999 | orchestrator |
2025-08-29 15:11:15.005014 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-08-29 15:11:15.005029 | orchestrator | Friday 29 August 2025 15:08:32 +0000 (0:00:09.556) 0:01:21.307 *********
2025-08-29 15:11:15.005053 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.005067 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:15.005082 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.005098 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:11:15.005115 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:11:15.005138 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:11:15.005154 | orchestrator |
2025-08-29 15:11:15.005171 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-08-29 15:11:15.005187 | orchestrator | Friday 29 August 2025 15:08:34 +0000 (0:00:02.049) 0:01:23.357 *********
2025-08-29 15:11:15.005205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.005221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005236 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.005253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.005315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005340 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.005367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005421 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:15.005437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.005453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005488 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:15.005506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005541 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:15.005570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005605 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:15.005621 | orchestrator |
2025-08-29 15:11:15.005637 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2025-08-29 15:11:15.005653 | orchestrator | Friday 29 August 2025 15:08:35 +0000 (0:00:01.365) 0:01:24.723 *********
2025-08-29 15:11:15.005668 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.005684 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.005700 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:15.005716 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:15.005732 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:15.005748 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:15.005775 | orchestrator |
2025-08-29 15:11:15.005792 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-08-29 15:11:15.005808 | orchestrator | Friday 29 August 2025 15:08:36 +0000 (0:00:00.543) 0:01:25.266 *********
2025-08-29 15:11:15.005827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.005901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.005940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-08-29 15:11:15.005960 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.005978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.006009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.006128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.006172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.006190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.006207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.006234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.006249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-08-29 15:11:15.006265 | orchestrator |
2025-08-29 15:11:15.006365 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-08-29 15:11:15.006387 | orchestrator | Friday 29 August 2025 15:08:38 +0000 (0:00:02.629) 0:01:27.896 *********
2025-08-29 15:11:15.006404 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.006420 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:11:15.006434 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:11:15.006444 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:11:15.006454 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:11:15.006463 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:11:15.006472 | orchestrator |
2025-08-29 15:11:15.006482 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-08-29 15:11:15.006492 | orchestrator | Friday 29 August 2025 15:08:39 +0000 (0:00:00.509) 0:01:28.405 *********
2025-08-29 15:11:15.006501 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:15.006511 | orchestrator |
2025-08-29 15:11:15.006520 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-08-29 15:11:15.006530 | orchestrator | Friday 29 August 2025 15:08:41 +0000 (0:00:02.567) 0:01:30.973 *********
2025-08-29 15:11:15.006543 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:15.006559 | orchestrator |
2025-08-29 15:11:15.006575 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-08-29 15:11:15.006594 | orchestrator | Friday 29 August 2025 15:08:43 +0000 (0:00:02.094) 0:01:33.068 *********
2025-08-29 15:11:15.006607 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:15.006620 | orchestrator |
2025-08-29 15:11:15.006633 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:11:15.006644 | orchestrator | Friday 29 August 2025 15:09:03 +0000 (0:00:20.097) 0:01:53.165 *********
2025-08-29 15:11:15.006655 | orchestrator |
2025-08-29 15:11:15.006678 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:11:15.006692 | orchestrator | Friday 29 August 2025 15:09:03 +0000 (0:00:00.071) 0:01:53.237 *********
2025-08-29 15:11:15.006712 | orchestrator |
2025-08-29 15:11:15.006736 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:11:15.006750 | orchestrator | Friday 29 August 2025 15:09:04 +0000 (0:00:00.073) 0:01:53.311 *********
2025-08-29 15:11:15.006763 | orchestrator |
2025-08-29 15:11:15.006774 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:11:15.006787 | orchestrator | Friday 29 August 2025 15:09:04 +0000 (0:00:00.071) 0:01:53.383 *********
2025-08-29 15:11:15.006800 | orchestrator |
2025-08-29 15:11:15.006825 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:11:15.006839 | orchestrator | Friday 29 August 2025 15:09:04 +0000 (0:00:00.067) 0:01:53.450 *********
2025-08-29 15:11:15.006852 | orchestrator |
2025-08-29 15:11:15.006865 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-08-29 15:11:15.006885 | orchestrator | Friday 29 August 2025 15:09:04 +0000 (0:00:00.067) 0:01:53.517 *********
2025-08-29 15:11:15.006899 | orchestrator |
2025-08-29 15:11:15.006912 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-08-29 15:11:15.006924 | orchestrator | Friday 29 August 2025 15:09:04 +0000 (0:00:00.072) 0:01:53.590 *********
2025-08-29 15:11:15.006936 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:15.006948 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:11:15.006961 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:11:15.006976 | orchestrator |
2025-08-29 15:11:15.006989 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-08-29 15:11:15.007003 | orchestrator | Friday 29 August 2025 15:09:30 +0000 (0:00:25.893) 0:02:19.483 *********
2025-08-29 15:11:15.007015 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:11:15.007026 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:11:15.007034 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:11:15.007042 | orchestrator |
2025-08-29 15:11:15.007050 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-08-29 15:11:15.007057 | orchestrator | Friday 29 August 2025 15:09:42 +0000 (0:00:12.718) 0:02:32.201 *********
2025-08-29 15:11:15.007065 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:11:15.007073 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:11:15.007081 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:11:15.007089 | orchestrator |
2025-08-29 15:11:15.007102 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-08-29 15:11:15.007111 | orchestrator | Friday 29 August 2025 15:10:59 +0000 (0:01:16.087) 0:03:48.288 *********
2025-08-29 15:11:15.007119 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:11:15.007127 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:11:15.007135 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:11:15.007142 | orchestrator |
2025-08-29 15:11:15.007150 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-08-29 15:11:15.007162 | orchestrator | Friday 29 August 2025 15:11:12 +0000 (0:00:13.234) 0:04:01.523 *********
2025-08-29 15:11:15.007175 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:11:15.007186 | orchestrator |
2025-08-29 15:11:15.007198 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:11:15.007212 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-08-29 15:11:15.007227 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:11:15.007239 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-08-29 15:11:15.007253 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:11:15.007266 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:11:15.007301 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-08-29 15:11:15.007315 | orchestrator |
2025-08-29 15:11:15.007348 | orchestrator |
2025-08-29 15:11:15.007379 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:11:15.007391 | orchestrator | Friday 29 August 2025 15:11:13 +0000 (0:00:00.828) 0:04:02.351 *********
2025-08-29 15:11:15.007416 | orchestrator | ===============================================================================
2025-08-29 15:11:15.007427 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 76.09s
2025-08-29 15:11:15.007439 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.89s
2025-08-29 15:11:15.007453 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.10s
2025-08-29 15:11:15.007465 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.23s
2025-08-29 15:11:15.007477 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.72s
2025-08-29 15:11:15.007490 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.56s
2025-08-29 15:11:15.007504 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.15s
2025-08-29 15:11:15.007518 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.10s
2025-08-29 15:11:15.007546 | orchestrator | cinder : include_tasks -------------------------------------------------- 4.36s
2025-08-29 15:11:15.007566 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.22s
2025-08-29 15:11:15.007588 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.84s
2025-08-29 15:11:15.007602 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.70s
2025-08-29 15:11:15.007616 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.70s
2025-08-29 15:11:15.007630 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.59s
2025-08-29 15:11:15.007640 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.54s
2025-08-29 15:11:15.007648 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.32s
2025-08-29 15:11:15.007656 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.91s
2025-08-29 15:11:15.007664 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.88s
2025-08-29 15:11:15.007671 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.88s
2025-08-29 15:11:15.007679 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.86s
2025-08-29 15:11:15.007687 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:15.007695 | orchestrator | 2025-08-29 15:11:14 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:15.007703 | orchestrator | 2025-08-29 15:11:15 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:18.029680 | orchestrator | 2025-08-29 15:11:18 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:18.029798 | orchestrator | 2025-08-29 15:11:18 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:18.030699 | orchestrator | 2025-08-29 15:11:18 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:18.031184 | orchestrator | 2025-08-29 15:11:18 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:18.031221 | orchestrator | 2025-08-29 15:11:18 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:21.067442 | orchestrator | 2025-08-29 15:11:21 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:21.067556 | orchestrator | 2025-08-29 15:11:21 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:21.068042 | orchestrator | 2025-08-29 15:11:21 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:21.068815 | orchestrator | 2025-08-29 15:11:21 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:21.068839 | orchestrator | 2025-08-29 15:11:21 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:24.092491 | orchestrator | 2025-08-29 15:11:24 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:24.092578 | orchestrator | 2025-08-29 15:11:24 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:24.092600 | orchestrator | 2025-08-29 15:11:24 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:24.094776 | orchestrator | 2025-08-29 15:11:24 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:24.094837 | orchestrator | 2025-08-29 15:11:24 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:27.124019 | orchestrator | 2025-08-29 15:11:27 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:27.124377 | orchestrator | 2025-08-29 15:11:27 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:27.125051 | orchestrator | 2025-08-29 15:11:27 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:27.125580 | orchestrator | 2025-08-29 15:11:27 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:27.125601 | orchestrator | 2025-08-29 15:11:27 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:30.157217 | orchestrator | 2025-08-29 15:11:30 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:30.157396 | orchestrator | 2025-08-29 15:11:30 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:30.158127 | orchestrator | 2025-08-29 15:11:30 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:30.159065 | orchestrator | 2025-08-29 15:11:30 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:30.159141 | orchestrator | 2025-08-29 15:11:30 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:33.184814 | orchestrator | 2025-08-29 15:11:33 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:33.186494 | orchestrator | 2025-08-29 15:11:33 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:33.189407 | orchestrator | 2025-08-29 15:11:33 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:33.190544 | orchestrator | 2025-08-29 15:11:33 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:33.190717 | orchestrator | 2025-08-29 15:11:33 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:36.221367 | orchestrator | 2025-08-29 15:11:36 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:36.221461 | orchestrator | 2025-08-29 15:11:36 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:36.222359 | orchestrator | 2025-08-29 15:11:36 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:36.223763 | orchestrator | 2025-08-29 15:11:36 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:36.223826 | orchestrator | 2025-08-29 15:11:36 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:39.265051 | orchestrator | 2025-08-29 15:11:39 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:39.267875 | orchestrator | 2025-08-29 15:11:39 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:39.270546 | orchestrator | 2025-08-29 15:11:39 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:39.272822 | orchestrator | 2025-08-29 15:11:39 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:39.272864 | orchestrator | 2025-08-29 15:11:39 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:42.300846 | orchestrator | 2025-08-29 15:11:42 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:42.302859 | orchestrator | 2025-08-29 15:11:42 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:42.303707 | orchestrator | 2025-08-29 15:11:42 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:42.304362 | orchestrator | 2025-08-29 15:11:42 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:42.304402 | orchestrator | 2025-08-29 15:11:42 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:45.348039 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:45.348169 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:45.348984 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:45.349753 | orchestrator | 2025-08-29 15:11:45 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:45.349947 | orchestrator | 2025-08-29 15:11:45 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:48.421492 | orchestrator | 2025-08-29 15:11:48 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:48.421581 | orchestrator | 2025-08-29 15:11:48 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:48.422128 | orchestrator | 2025-08-29 15:11:48 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:48.422653 | orchestrator | 2025-08-29 15:11:48 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:48.422727 | orchestrator | 2025-08-29 15:11:48 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:51.449135 | orchestrator | 2025-08-29 15:11:51 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:51.449901 | orchestrator | 2025-08-29 15:11:51 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:51.450567 | orchestrator | 2025-08-29 15:11:51 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:51.451018 | orchestrator | 2025-08-29 15:11:51 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:51.451057 | orchestrator | 2025-08-29 15:11:51 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:54.473729 | orchestrator | 2025-08-29 15:11:54 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:54.473976 | orchestrator | 2025-08-29 15:11:54 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:54.474471 | orchestrator | 2025-08-29 15:11:54 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:54.478492 | orchestrator | 2025-08-29 15:11:54 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:54.478594 | orchestrator | 2025-08-29 15:11:54 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:11:57.523508 | orchestrator | 2025-08-29 15:11:57 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:11:57.523853 | orchestrator | 2025-08-29 15:11:57 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:11:57.524346 | orchestrator | 2025-08-29 15:11:57 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:11:57.524992 | orchestrator | 2025-08-29 15:11:57 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:11:57.525045 | orchestrator | 2025-08-29 15:11:57 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:12:00.549313 | orchestrator | 2025-08-29 15:12:00 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:12:00.549562 | orchestrator | 2025-08-29 15:12:00 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:12:00.550216 | orchestrator | 2025-08-29 15:12:00 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED
2025-08-29 15:12:00.550939 | orchestrator | 2025-08-29 15:12:00 | INFO  | Task
21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:00.550973 | orchestrator | 2025-08-29 15:12:00 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:03.586886 | orchestrator | 2025-08-29 15:12:03 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:03.587404 | orchestrator | 2025-08-29 15:12:03 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:03.588147 | orchestrator | 2025-08-29 15:12:03 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED 2025-08-29 15:12:03.589190 | orchestrator | 2025-08-29 15:12:03 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:03.589218 | orchestrator | 2025-08-29 15:12:03 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:06.623636 | orchestrator | 2025-08-29 15:12:06 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:06.623740 | orchestrator | 2025-08-29 15:12:06 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:06.624173 | orchestrator | 2025-08-29 15:12:06 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED 2025-08-29 15:12:06.624878 | orchestrator | 2025-08-29 15:12:06 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:06.624912 | orchestrator | 2025-08-29 15:12:06 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:09.651611 | orchestrator | 2025-08-29 15:12:09 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:09.652309 | orchestrator | 2025-08-29 15:12:09 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:09.653144 | orchestrator | 2025-08-29 15:12:09 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED 2025-08-29 15:12:09.654617 | orchestrator | 2025-08-29 15:12:09 | INFO  | Task 
21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:09.654664 | orchestrator | 2025-08-29 15:12:09 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:12.682341 | orchestrator | 2025-08-29 15:12:12 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:12.682660 | orchestrator | 2025-08-29 15:12:12 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:12.683533 | orchestrator | 2025-08-29 15:12:12 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED 2025-08-29 15:12:12.684261 | orchestrator | 2025-08-29 15:12:12 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:12.684299 | orchestrator | 2025-08-29 15:12:12 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:15.721472 | orchestrator | 2025-08-29 15:12:15 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:15.721950 | orchestrator | 2025-08-29 15:12:15 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:15.722783 | orchestrator | 2025-08-29 15:12:15 | INFO  | Task 4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state STARTED 2025-08-29 15:12:15.723208 | orchestrator | 2025-08-29 15:12:15 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:15.723272 | orchestrator | 2025-08-29 15:12:15 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:18.745774 | orchestrator | 2025-08-29 15:12:18 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:18.745921 | orchestrator | 2025-08-29 15:12:18 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:18.747051 | orchestrator | 2025-08-29 15:12:18 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:18.748455 | orchestrator | 2025-08-29 15:12:18 | INFO  | Task 
4db36d32-af5b-4fe1-9154-53089ba7cf07 is in state SUCCESS
2025-08-29 15:12:18.749600 | orchestrator |
2025-08-29 15:12:18.749635 | orchestrator |
2025-08-29 15:12:18.749642 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:12:18.749647 | orchestrator |
2025-08-29 15:12:18.749651 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:12:18.749656 | orchestrator | Friday 29 August 2025 15:10:10 +0000 (0:00:00.317) 0:00:00.317 *********
2025-08-29 15:12:18.749660 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:12:18.749665 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:12:18.749669 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:12:18.749673 | orchestrator |
2025-08-29 15:12:18.749677 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:12:18.749681 | orchestrator | Friday 29 August 2025 15:10:10 +0000 (0:00:00.345) 0:00:00.663 *********
2025-08-29 15:12:18.749685 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-08-29 15:12:18.749690 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-08-29 15:12:18.749694 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-08-29 15:12:18.749700 | orchestrator |
2025-08-29 15:12:18.749706 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-08-29 15:12:18.749711 | orchestrator |
2025-08-29 15:12:18.749716 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-08-29 15:12:18.749722 | orchestrator | Friday 29 August 2025 15:10:11 +0000 (0:00:00.525) 0:00:01.189 *********
2025-08-29 15:12:18.749728 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:12:18.749734 | orchestrator |
2025-08-29 15:12:18.749740 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-08-29 15:12:18.749746 | orchestrator | Friday 29 August 2025 15:10:12 +0000 (0:00:00.545) 0:00:01.735 *********
2025-08-29 15:12:18.749753 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-08-29 15:12:18.749760 | orchestrator |
2025-08-29 15:12:18.749766 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-08-29 15:12:18.749841 | orchestrator | Friday 29 August 2025 15:10:15 +0000 (0:00:03.340) 0:00:05.075 *********
2025-08-29 15:12:18.749849 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-08-29 15:12:18.749856 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-08-29 15:12:18.749863 | orchestrator |
2025-08-29 15:12:18.749870 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-08-29 15:12:18.749877 | orchestrator | Friday 29 August 2025 15:10:21 +0000 (0:00:06.583) 0:00:11.659 *********
2025-08-29 15:12:18.750057 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 15:12:18.750066 | orchestrator |
2025-08-29 15:12:18.750070 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-08-29 15:12:18.750074 | orchestrator | Friday 29 August 2025 15:10:25 +0000 (0:00:03.481) 0:00:15.140 *********
2025-08-29 15:12:18.750078 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:12:18.750083 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-08-29 15:12:18.750087 | orchestrator |
2025-08-29 15:12:18.750091 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-08-29 15:12:18.750095 | orchestrator | Friday 29 August 2025 15:10:29 +0000
(0:00:04.173) 0:00:19.314 *********
2025-08-29 15:12:18.750099 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 15:12:18.750103 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-08-29 15:12:18.750107 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-08-29 15:12:18.750111 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-08-29 15:12:18.750114 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-08-29 15:12:18.750118 | orchestrator |
2025-08-29 15:12:18.750122 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-08-29 15:12:18.750126 | orchestrator | Friday 29 August 2025 15:10:46 +0000 (0:00:16.848) 0:00:36.163 *********
2025-08-29 15:12:18.750130 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-08-29 15:12:18.750133 | orchestrator |
2025-08-29 15:12:18.750137 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-08-29 15:12:18.750141 | orchestrator | Friday 29 August 2025 15:10:50 +0000 (0:00:04.451) 0:00:40.614 *********
2025-08-29 15:12:18.750159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750285 | 
orchestrator |
2025-08-29 15:12:18.750290 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-08-29 15:12:18.750294 | orchestrator | Friday 29 August 2025 15:10:53 +0000 (0:00:02.298) 0:00:42.913 *********
2025-08-29 15:12:18.750298 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-08-29 15:12:18.750302 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-08-29 15:12:18.750306 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-08-29 15:12:18.750354 | orchestrator |
2025-08-29 15:12:18.750359 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-08-29 15:12:18.750363 | orchestrator | Friday 29 August 2025 15:10:55 +0000 (0:00:02.088) 0:00:45.001 *********
2025-08-29 15:12:18.750367 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:12:18.750371 | orchestrator |
2025-08-29 15:12:18.750375 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-08-29 15:12:18.750378 | orchestrator | Friday 29 August 2025 15:10:55 +0000 (0:00:00.365) 0:00:45.366 *********
2025-08-29 15:12:18.750382 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:12:18.750386 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:12:18.750390 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:12:18.750393 | orchestrator |
2025-08-29 15:12:18.750397 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-08-29 15:12:18.750401 | orchestrator | Friday 29 August 2025 15:10:56 +0000 (0:00:01.164) 0:00:46.531 *********
2025-08-29 15:12:18.750405 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:12:18.750409 | orchestrator |
2025-08-29 15:12:18.750413 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA
certificates] ******* 2025-08-29 15:12:18.750417 | orchestrator | Friday 29 August 2025 15:10:58 +0000 (0:00:01.558) 0:00:48.090 ********* 2025-08-29 15:12:18.750425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750544 | orchestrator | 2025-08-29 15:12:18.750550 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-08-29 15:12:18.750557 | orchestrator | Friday 29 August 2025 15:11:02 +0000 (0:00:04.613) 0:00:52.704 ********* 2025-08-29 15:12:18.750563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.750570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750585 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:18.750599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.750614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750622 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:18.750626 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.750630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750641 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:18.750645 | orchestrator | 2025-08-29 15:12:18.750653 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-08-29 15:12:18.750657 | orchestrator | Friday 29 August 2025 15:11:04 +0000 (0:00:01.476) 0:00:54.181 ********* 2025-08-29 15:12:18.750665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.750669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750677 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:18.750681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.750688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750699 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:18.750708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.750715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.750728 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:18.750734 | orchestrator | 2025-08-29 15:12:18.750740 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-08-29 15:12:18.750745 | orchestrator | Friday 29 August 2025 15:11:05 +0000 (0:00:01.370) 0:00:55.551 ********* 2025-08-29 15:12:18.750752 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750831 | orchestrator | 2025-08-29 15:12:18.750837 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-08-29 15:12:18.750843 | orchestrator | Friday 29 August 2025 15:11:10 +0000 (0:00:04.805) 0:01:00.356 ********* 2025-08-29 15:12:18.750849 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:18.750856 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:12:18.750862 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:12:18.750867 | orchestrator | 2025-08-29 15:12:18.750875 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-08-29 15:12:18.750879 | orchestrator | Friday 29 August 2025 15:11:13 +0000 (0:00:02.964) 0:01:03.321 ********* 2025-08-29 15:12:18.750883 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:12:18.750887 | orchestrator | 2025-08-29 15:12:18.750890 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-08-29 15:12:18.750894 | orchestrator | Friday 29 August 2025 15:11:16 +0000 (0:00:02.593) 0:01:05.914 ********* 2025-08-29 15:12:18.750898 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:18.750901 | orchestrator | skipping: [testbed-node-2] 2025-08-29 
15:12:18.750905 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:18.750909 | orchestrator | 2025-08-29 15:12:18.750912 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-08-29 15:12:18.750916 | orchestrator | Friday 29 August 2025 15:11:16 +0000 (0:00:00.678) 0:01:06.592 ********* 2025-08-29 15:12:18.750920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.750943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750947 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750970 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.750984 | orchestrator | 2025-08-29 15:12:18.750990 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-08-29 15:12:18.750996 | orchestrator | Friday 29 August 2025 15:11:26 +0000 (0:00:09.358) 0:01:15.951 ********* 2025-08-29 15:12:18.751006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.751011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.751015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.751022 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:18.751026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.751033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.751041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.751048 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:12:18.751054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-08-29 15:12:18.751060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.751071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:12:18.751077 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:18.751082 | orchestrator | 2025-08-29 15:12:18.751088 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-08-29 15:12:18.751093 | orchestrator | Friday 29 August 2025 15:11:27 +0000 (0:00:00.826) 0:01:16.777 ********* 2025-08-29 15:12:18.751103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.751114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.751121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-08-29 15:12:18.751128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.751141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.751149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.751157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.751167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.751172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:12:18.751176 | orchestrator | 2025-08-29 15:12:18.751180 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-08-29 15:12:18.751185 | orchestrator | Friday 29 August 2025 15:11:29 +0000 (0:00:02.824) 0:01:19.601 ********* 2025-08-29 15:12:18.751193 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:12:18.751198 | orchestrator | skipping: 
[testbed-node-1] 2025-08-29 15:12:18.751202 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:12:18.751206 | orchestrator | 2025-08-29 15:12:18.751210 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-08-29 15:12:18.751215 | orchestrator | Friday 29 August 2025 15:11:30 +0000 (0:00:00.375) 0:01:19.977 ********* 2025-08-29 15:12:18.751219 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:18.751223 | orchestrator | 2025-08-29 15:12:18.751252 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-08-29 15:12:18.751257 | orchestrator | Friday 29 August 2025 15:11:33 +0000 (0:00:02.763) 0:01:22.741 ********* 2025-08-29 15:12:18.751261 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:18.751265 | orchestrator | 2025-08-29 15:12:18.751269 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-08-29 15:12:18.751274 | orchestrator | Friday 29 August 2025 15:11:35 +0000 (0:00:02.288) 0:01:25.029 ********* 2025-08-29 15:12:18.751278 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:18.751282 | orchestrator | 2025-08-29 15:12:18.751286 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:12:18.751290 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:11.374) 0:01:36.403 ********* 2025-08-29 15:12:18.751295 | orchestrator | 2025-08-29 15:12:18.751299 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:12:18.751303 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:00.175) 0:01:36.579 ********* 2025-08-29 15:12:18.751307 | orchestrator | 2025-08-29 15:12:18.751311 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-08-29 15:12:18.751315 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:00.097) 
0:01:36.676 ********* 2025-08-29 15:12:18.751319 | orchestrator | 2025-08-29 15:12:18.751324 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-08-29 15:12:18.751328 | orchestrator | Friday 29 August 2025 15:11:47 +0000 (0:00:00.079) 0:01:36.756 ********* 2025-08-29 15:12:18.751332 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:18.751336 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:12:18.751340 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:12:18.751345 | orchestrator | 2025-08-29 15:12:18.751349 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-08-29 15:12:18.751353 | orchestrator | Friday 29 August 2025 15:11:55 +0000 (0:00:08.309) 0:01:45.065 ********* 2025-08-29 15:12:18.751357 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:18.751361 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:12:18.751366 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:12:18.751370 | orchestrator | 2025-08-29 15:12:18.751374 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-08-29 15:12:18.751378 | orchestrator | Friday 29 August 2025 15:12:07 +0000 (0:00:12.356) 0:01:57.422 ********* 2025-08-29 15:12:18.751382 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:12:18.751386 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:12:18.751390 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:12:18.751395 | orchestrator | 2025-08-29 15:12:18.751399 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:12:18.751408 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-08-29 15:12:18.751414 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:12:18.751418 | orchestrator | 
testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:12:18.751422 | orchestrator | 2025-08-29 15:12:18.751427 | orchestrator | 2025-08-29 15:12:18.751431 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:12:18.751439 | orchestrator | Friday 29 August 2025 15:12:15 +0000 (0:00:07.549) 0:02:04.971 ********* 2025-08-29 15:12:18.751443 | orchestrator | =============================================================================== 2025-08-29 15:12:18.751447 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.85s 2025-08-29 15:12:18.751454 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.36s 2025-08-29 15:12:18.751458 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.37s 2025-08-29 15:12:18.751463 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.36s 2025-08-29 15:12:18.751467 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.31s 2025-08-29 15:12:18.751471 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.55s 2025-08-29 15:12:18.751475 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.58s 2025-08-29 15:12:18.751480 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.81s 2025-08-29 15:12:18.751484 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.61s 2025-08-29 15:12:18.751488 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.45s 2025-08-29 15:12:18.751493 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.17s 2025-08-29 15:12:18.751497 | orchestrator | service-ks-register : barbican | Creating 
projects ---------------------- 3.48s 2025-08-29 15:12:18.751502 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.34s 2025-08-29 15:12:18.751508 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.96s 2025-08-29 15:12:18.751515 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.82s 2025-08-29 15:12:18.751521 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.76s 2025-08-29 15:12:18.751527 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.59s 2025-08-29 15:12:18.751533 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.30s 2025-08-29 15:12:18.751538 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.29s 2025-08-29 15:12:18.751546 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.09s 2025-08-29 15:12:18.751552 | orchestrator | 2025-08-29 15:12:18 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:18.751559 | orchestrator | 2025-08-29 15:12:18 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:21.787793 | orchestrator | 2025-08-29 15:12:21 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:21.788112 | orchestrator | 2025-08-29 15:12:21 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:21.789066 | orchestrator | 2025-08-29 15:12:21 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:21.790169 | orchestrator | 2025-08-29 15:12:21 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:21.790214 | orchestrator | 2025-08-29 15:12:21 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:24.823398 | orchestrator | 2025-08-29 15:12:24 | 
INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:24.824618 | orchestrator | 2025-08-29 15:12:24 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:24.826247 | orchestrator | 2025-08-29 15:12:24 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:24.831438 | orchestrator | 2025-08-29 15:12:24 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:24.831529 | orchestrator | 2025-08-29 15:12:24 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:27.872082 | orchestrator | 2025-08-29 15:12:27 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:27.872372 | orchestrator | 2025-08-29 15:12:27 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:27.874478 | orchestrator | 2025-08-29 15:12:27 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:27.876185 | orchestrator | 2025-08-29 15:12:27 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:27.876274 | orchestrator | 2025-08-29 15:12:27 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:30.915270 | orchestrator | 2025-08-29 15:12:30 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:30.919171 | orchestrator | 2025-08-29 15:12:30 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:30.922078 | orchestrator | 2025-08-29 15:12:30 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:30.925162 | orchestrator | 2025-08-29 15:12:30 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:30.925482 | orchestrator | 2025-08-29 15:12:30 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:33.959669 | orchestrator | 2025-08-29 15:12:33 | INFO  | Task 
f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:33.959953 | orchestrator | 2025-08-29 15:12:33 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:33.960790 | orchestrator | 2025-08-29 15:12:33 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:33.962609 | orchestrator | 2025-08-29 15:12:33 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:33.962647 | orchestrator | 2025-08-29 15:12:33 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:37.018763 | orchestrator | 2025-08-29 15:12:37 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:37.019777 | orchestrator | 2025-08-29 15:12:37 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:37.022949 | orchestrator | 2025-08-29 15:12:37 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:37.024834 | orchestrator | 2025-08-29 15:12:37 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:37.026951 | orchestrator | 2025-08-29 15:12:37 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:40.071844 | orchestrator | 2025-08-29 15:12:40 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:40.072828 | orchestrator | 2025-08-29 15:12:40 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:40.074117 | orchestrator | 2025-08-29 15:12:40 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:40.077092 | orchestrator | 2025-08-29 15:12:40 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:40.077128 | orchestrator | 2025-08-29 15:12:40 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:43.114936 | orchestrator | 2025-08-29 15:12:43 | INFO  | Task 
f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:43.117345 | orchestrator | 2025-08-29 15:12:43 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:43.120034 | orchestrator | 2025-08-29 15:12:43 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:43.124308 | orchestrator | 2025-08-29 15:12:43 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:43.124362 | orchestrator | 2025-08-29 15:12:43 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:46.194824 | orchestrator | 2025-08-29 15:12:46 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:46.194881 | orchestrator | 2025-08-29 15:12:46 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:46.196514 | orchestrator | 2025-08-29 15:12:46 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:46.197388 | orchestrator | 2025-08-29 15:12:46 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:46.197783 | orchestrator | 2025-08-29 15:12:46 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:49.261168 | orchestrator | 2025-08-29 15:12:49 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:49.263809 | orchestrator | 2025-08-29 15:12:49 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:49.265039 | orchestrator | 2025-08-29 15:12:49 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:49.266721 | orchestrator | 2025-08-29 15:12:49 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:49.266873 | orchestrator | 2025-08-29 15:12:49 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:52.304825 | orchestrator | 2025-08-29 15:12:52 | INFO  | Task 
f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:52.307015 | orchestrator | 2025-08-29 15:12:52 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:52.309321 | orchestrator | 2025-08-29 15:12:52 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:52.311590 | orchestrator | 2025-08-29 15:12:52 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:52.311635 | orchestrator | 2025-08-29 15:12:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:55.341126 | orchestrator | 2025-08-29 15:12:55 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:55.341608 | orchestrator | 2025-08-29 15:12:55 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:55.342494 | orchestrator | 2025-08-29 15:12:55 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:55.343228 | orchestrator | 2025-08-29 15:12:55 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:55.352463 | orchestrator | 2025-08-29 15:12:55 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:12:58.389607 | orchestrator | 2025-08-29 15:12:58 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:12:58.392400 | orchestrator | 2025-08-29 15:12:58 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:12:58.395651 | orchestrator | 2025-08-29 15:12:58 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:12:58.398304 | orchestrator | 2025-08-29 15:12:58 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:12:58.398376 | orchestrator | 2025-08-29 15:12:58 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:01.422420 | orchestrator | 2025-08-29 15:13:01 | INFO  | Task 
f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:01.424603 | orchestrator | 2025-08-29 15:13:01 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state STARTED 2025-08-29 15:13:01.425057 | orchestrator | 2025-08-29 15:13:01 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:01.425872 | orchestrator | 2025-08-29 15:13:01 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:01.425952 | orchestrator | 2025-08-29 15:13:01 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:04.453429 | orchestrator | 2025-08-29 15:13:04 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:04.453537 | orchestrator | 2025-08-29 15:13:04 | INFO  | Task d8f0f777-340d-46e4-a88d-4a3fc5176299 is in state SUCCESS 2025-08-29 15:13:04.454756 | orchestrator | 2025-08-29 15:13:04 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:04.455817 | orchestrator | 2025-08-29 15:13:04 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:04.456427 | orchestrator | 2025-08-29 15:13:04 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:04.456445 | orchestrator | 2025-08-29 15:13:04 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:07.491627 | orchestrator | 2025-08-29 15:13:07 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:07.491798 | orchestrator | 2025-08-29 15:13:07 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:07.492380 | orchestrator | 2025-08-29 15:13:07 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:07.493633 | orchestrator | 2025-08-29 15:13:07 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:07.493670 | orchestrator | 2025-08-29 15:13:07 | INFO  | Wait 1 
second(s) until the next check 2025-08-29 15:13:10.518370 | orchestrator | 2025-08-29 15:13:10 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:10.518815 | orchestrator | 2025-08-29 15:13:10 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:10.519453 | orchestrator | 2025-08-29 15:13:10 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:10.520322 | orchestrator | 2025-08-29 15:13:10 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:10.520374 | orchestrator | 2025-08-29 15:13:10 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:13.543897 | orchestrator | 2025-08-29 15:13:13 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:13.544019 | orchestrator | 2025-08-29 15:13:13 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:13.545642 | orchestrator | 2025-08-29 15:13:13 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:13.546488 | orchestrator | 2025-08-29 15:13:13 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:13.546501 | orchestrator | 2025-08-29 15:13:13 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:16.582125 | orchestrator | 2025-08-29 15:13:16 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:16.582844 | orchestrator | 2025-08-29 15:13:16 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:16.584707 | orchestrator | 2025-08-29 15:13:16 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:16.586243 | orchestrator | 2025-08-29 15:13:16 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:16.586698 | orchestrator | 2025-08-29 15:13:16 | INFO  | Wait 1 second(s) until the next check 
2025-08-29 15:13:19.628299 | orchestrator | 2025-08-29 15:13:19 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:19.629783 | orchestrator | 2025-08-29 15:13:19 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:19.631871 | orchestrator | 2025-08-29 15:13:19 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:19.632476 | orchestrator | 2025-08-29 15:13:19 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:19.632505 | orchestrator | 2025-08-29 15:13:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:22.663249 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:22.663784 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:22.666719 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:22.667488 | orchestrator | 2025-08-29 15:13:22 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:22.667537 | orchestrator | 2025-08-29 15:13:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:25.714012 | orchestrator | 2025-08-29 15:13:25 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:25.716701 | orchestrator | 2025-08-29 15:13:25 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:25.718299 | orchestrator | 2025-08-29 15:13:25 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:25.719647 | orchestrator | 2025-08-29 15:13:25 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:25.719731 | orchestrator | 2025-08-29 15:13:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:28.764147 | 
orchestrator | 2025-08-29 15:13:28 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:28.766546 | orchestrator | 2025-08-29 15:13:28 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:28.767660 | orchestrator | 2025-08-29 15:13:28 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:28.768935 | orchestrator | 2025-08-29 15:13:28 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:28.768992 | orchestrator | 2025-08-29 15:13:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:31.798286 | orchestrator | 2025-08-29 15:13:31 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:31.799488 | orchestrator | 2025-08-29 15:13:31 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:31.800352 | orchestrator | 2025-08-29 15:13:31 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:31.801391 | orchestrator | 2025-08-29 15:13:31 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:31.801599 | orchestrator | 2025-08-29 15:13:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:34.845342 | orchestrator | 2025-08-29 15:13:34 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:13:34.845438 | orchestrator | 2025-08-29 15:13:34 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:13:34.846226 | orchestrator | 2025-08-29 15:13:34 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:13:34.846874 | orchestrator | 2025-08-29 15:13:34 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED 2025-08-29 15:13:34.846911 | orchestrator | 2025-08-29 15:13:34 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:13:37.878584 | orchestrator | 2025-08-29 
15:13:37 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:13:37.878684 | orchestrator | 2025-08-29 15:13:37 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED
2025-08-29 15:13:37.880826 | orchestrator | 2025-08-29 15:13:37 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:13:37.881338 | orchestrator | 2025-08-29 15:13:37 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state STARTED
2025-08-29 15:13:37.881373 | orchestrator | 2025-08-29 15:13:37 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:13:59.184127 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:13:59.185013 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task caf7e1e6-bf31-4657-a700-5410817050ed is in state STARTED
2025-08-29 15:13:59.186282 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED
2025-08-29 15:13:59.196049 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED
2025-08-29 15:13:59.200035 | orchestrator | 2025-08-29 15:13:59 | INFO  | Task 21d52088-e07d-41c1-a19c-b5b32aa346cb is in state SUCCESS
2025-08-29 15:13:59.202639 | orchestrator |
2025-08-29 15:13:59.202719 | orchestrator |
2025-08-29 15:13:59.202730 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-08-29 15:13:59.202738 | orchestrator |
2025-08-29 15:13:59.202742 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-08-29 15:13:59.202747 | orchestrator | Friday 29 August 2025 15:12:20 +0000 (0:00:00.114) 0:00:00.114 *********
2025-08-29 15:13:59.202751 | orchestrator | changed: [localhost]
2025-08-29 15:13:59.202756 | orchestrator |
2025-08-29 15:13:59.202760 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
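The task-state polling above is a plain poll-and-sleep loop: query every task, and as long as any of them reports STARTED, wait a fixed interval and check again (here 21d52088… eventually flips to SUCCESS). A minimal sketch of that pattern, with a caller-supplied `fetch_state` callback — this is illustrative, not the osism client API:

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=60.0):
    """Poll fetch_state(task_id) until no task is in state STARTED.

    fetch_state is a hypothetical callback supplied by the caller;
    returns the final {task_id: state} mapping or raises on timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        states = {tid: fetch_state(tid) for tid in task_ids}
        if all(state != "STARTED" for state in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {states}")
        time.sleep(interval)

# Simulated backend: each task reports SUCCESS after three polls.
_polls = {"a": 0, "b": 0}
def fake_state(tid):
    _polls[tid] += 1
    return "SUCCESS" if _polls[tid] >= 3 else "STARTED"

print(wait_for_tasks(["a", "b"], fake_state, interval=0.01))
# → {'a': 'SUCCESS', 'b': 'SUCCESS'}
```

The fixed one-second interval matches the "Wait 1 second(s) until the next check" messages; a production loop would typically add a timeout like the one sketched here.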
2025-08-29 15:13:59.202764 | orchestrator | Friday 29 August 2025 15:12:21 +0000 (0:00:00.931) 0:00:01.045 *********
2025-08-29 15:13:59.202768 | orchestrator | changed: [localhost]
2025-08-29 15:13:59.202772 | orchestrator |
2025-08-29 15:13:59.202776 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-08-29 15:13:59.202780 | orchestrator | Friday 29 August 2025 15:12:56 +0000 (0:00:35.360) 0:00:36.405 *********
2025-08-29 15:13:59.202784 | orchestrator | changed: [localhost]
2025-08-29 15:13:59.202788 | orchestrator |
2025-08-29 15:13:59.202804 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:13:59.202809 | orchestrator |
2025-08-29 15:13:59.202813 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:13:59.202823 | orchestrator | Friday 29 August 2025 15:13:00 +0000 (0:00:04.193) 0:00:40.599 *********
2025-08-29 15:13:59.202827 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:13:59.202831 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:13:59.202835 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:13:59.202839 | orchestrator |
2025-08-29 15:13:59.202843 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:13:59.202847 | orchestrator | Friday 29 August 2025 15:13:01 +0000 (0:00:00.442) 0:00:41.041 *********
2025-08-29 15:13:59.202851 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-08-29 15:13:59.202871 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-08-29 15:13:59.202875 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-08-29 15:13:59.202879 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-08-29 15:13:59.202883 | orchestrator |
2025-08-29 15:13:59.202886 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-08-29 15:13:59.202890 | orchestrator | skipping: no hosts matched
2025-08-29 15:13:59.202895 | orchestrator |
2025-08-29 15:13:59.202899 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:13:59.202903 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:13:59.202911 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:13:59.202920 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:13:59.202926 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:13:59.202931 | orchestrator |
2025-08-29 15:13:59.202937 | orchestrator |
2025-08-29 15:13:59.202943 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:13:59.202948 | orchestrator | Friday 29 August 2025 15:13:02 +0000 (0:00:00.953) 0:00:41.995 *********
2025-08-29 15:13:59.202954 | orchestrator | ===============================================================================
2025-08-29 15:13:59.202960 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 35.36s
2025-08-29 15:13:59.202966 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.19s
2025-08-29 15:13:59.202971 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.95s
2025-08-29 15:13:59.202977 | orchestrator | Ensure the destination directory exists --------------------------------- 0.93s
2025-08-29 15:13:59.203027 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
2025-08-29 15:13:59.203032 | orchestrator |
2025-08-29 15:13:59.203036 | orchestrator |
2025-08-29
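The TASKS RECAP block above is easy to post-process when collecting timing data across runs. A small parser for the `name ---- 35.36s` lines, with the line format assumed from this log rather than any stable Ansible interface:

```python
import re

# Task name, a run of two or more dashes, then seconds with an "s" suffix.
TIMING = re.compile(r"^(?P<name>.+?)\s*-{2,}\s*(?P<secs>\d+\.\d+)s$")

def parse_timing(line):
    """Return (task_name, seconds) for one TASKS RECAP line, else None."""
    m = TIMING.match(line.strip())
    if not m:
        return None
    return m.group("name"), float(m.group("secs"))

line = "Download ironic-agent initramfs ---------------------------------------- 35.36s"
print(parse_timing(line))  # → ('Download ironic-agent initramfs', 35.36)
```

Note the `-{2,}` separator: single hyphens inside task names such as `ironic-agent` do not terminate the name, only the long dash run does.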
15:13:59.203040 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:13:59.203044 | orchestrator |
2025-08-29 15:13:59.203061 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:13:59.203067 | orchestrator | Friday 29 August 2025 15:09:50 +0000 (0:00:00.266) 0:00:00.266 *********
2025-08-29 15:13:59.203073 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:13:59.203079 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:13:59.203086 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:13:59.203236 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:13:59.203242 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:13:59.203247 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:13:59.203251 | orchestrator |
2025-08-29 15:13:59.203270 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:13:59.203275 | orchestrator | Friday 29 August 2025 15:09:51 +0000 (0:00:00.643) 0:00:00.910 *********
2025-08-29 15:13:59.203280 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-08-29 15:13:59.203285 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-08-29 15:13:59.203289 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-08-29 15:13:59.203294 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-08-29 15:13:59.203298 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-08-29 15:13:59.203303 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-08-29 15:13:59.203308 | orchestrator |
2025-08-29 15:13:59.203314 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-08-29 15:13:59.203320 | orchestrator |
2025-08-29 15:13:59.203326 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 15:13:59.203351 | orchestrator | Friday 29 August 2025 15:09:52 +0000 (0:00:00.615) 0:00:01.526 *********
2025-08-29 15:13:59.203377 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:13:59.203385 | orchestrator |
2025-08-29 15:13:59.203391 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-08-29 15:13:59.203398 | orchestrator | Friday 29 August 2025 15:09:53 +0000 (0:00:01.104) 0:00:02.630 *********
2025-08-29 15:13:59.203405 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:13:59.203414 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:13:59.203421 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:13:59.203427 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:13:59.203433 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:13:59.203440 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:13:59.203446 | orchestrator |
2025-08-29 15:13:59.203453 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-08-29 15:13:59.203459 | orchestrator | Friday 29 August 2025 15:09:54 +0000 (0:00:01.377) 0:00:04.008 *********
2025-08-29 15:13:59.203465 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:13:59.203471 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:13:59.203476 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:13:59.203481 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:13:59.203487 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:13:59.203492 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:13:59.203498 | orchestrator |
2025-08-29 15:13:59.203504 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-08-29 15:13:59.203510 | orchestrator | Friday 29 August 2025 15:09:55 +0000 (0:00:01.294) 0:00:05.303 *********
2025-08-29 15:13:59.203516 | orchestrator | ok: [testbed-node-0] => {
2025-08-29 15:13:59.203523 | orchestrator |     "changed": false,
2025-08-29 15:13:59.203529 | orchestrator |     "msg": "All assertions passed"
2025-08-29 15:13:59.203536 | orchestrator | }
2025-08-29 15:13:59.203542 | orchestrator | ok: [testbed-node-1] => {
2025-08-29 15:13:59.203550 | orchestrator |     "changed": false,
2025-08-29 15:13:59.203556 | orchestrator |     "msg": "All assertions passed"
2025-08-29 15:13:59.203563 | orchestrator | }
2025-08-29 15:13:59.203569 | orchestrator | ok: [testbed-node-2] => {
2025-08-29 15:13:59.203576 | orchestrator |     "changed": false,
2025-08-29 15:13:59.203582 | orchestrator |     "msg": "All assertions passed"
2025-08-29 15:13:59.203586 | orchestrator | }
2025-08-29 15:13:59.203590 | orchestrator | ok: [testbed-node-3] => {
2025-08-29 15:13:59.203593 | orchestrator |     "changed": false,
2025-08-29 15:13:59.203597 | orchestrator |     "msg": "All assertions passed"
2025-08-29 15:13:59.203601 | orchestrator | }
2025-08-29 15:13:59.203605 | orchestrator | ok: [testbed-node-4] => {
2025-08-29 15:13:59.203609 | orchestrator |     "changed": false,
2025-08-29 15:13:59.203612 | orchestrator |     "msg": "All assertions passed"
2025-08-29 15:13:59.203616 | orchestrator | }
2025-08-29 15:13:59.203620 | orchestrator | ok: [testbed-node-5] => {
2025-08-29 15:13:59.203624 | orchestrator |     "changed": false,
2025-08-29 15:13:59.203627 | orchestrator |     "msg": "All assertions passed"
2025-08-29 15:13:59.203631 | orchestrator | }
2025-08-29 15:13:59.203635 | orchestrator |
2025-08-29 15:13:59.203638 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-08-29 15:13:59.203642 | orchestrator | Friday 29 August 2025 15:09:56 +0000 (0:00:00.777) 0:00:06.080 *********
2025-08-29 15:13:59.203646 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:59.203650 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:59.203654 | orchestrator | skipping: [testbed-node-2]
2025-08-29
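Both plays begin by bucketing hosts into dynamic groups such as `enable_ironic_False` and `enable_neutron_True`, so that the "Apply role …" play can target only matching hosts; the ironic play above finds no hosts in `enable_ironic_True` and is skipped entirely. A rough Python equivalent of that `group_by` step, with illustrative names:

```python
from collections import defaultdict

def group_hosts(host_flags, service):
    """Mimic Ansible group_by: place each host in enable_<service>_<True|False>."""
    groups = defaultdict(list)
    for host, flags in host_flags.items():
        groups[f"enable_{service}_{flags.get(service, False)}"].append(host)
    return dict(groups)

hosts = {
    "testbed-node-0": {"ironic": False, "neutron": True},
    "testbed-node-1": {"ironic": False, "neutron": True},
}
print(group_hosts(hosts, "ironic"))
# → {'enable_ironic_False': ['testbed-node-0', 'testbed-node-1']}
```

Because the group name embeds the flag value, a later play can simply use `enable_neutron_True` as its host pattern, which is why an empty `enable_ironic_True` group produces the "Could not match supplied host pattern" warning rather than an error.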
15:13:59.203660 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:59.203667 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:59.203673 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:59.203678 | orchestrator |
2025-08-29 15:13:59.203688 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-08-29 15:13:59.203694 | orchestrator | Friday 29 August 2025 15:09:57 +0000 (0:00:00.638) 0:00:06.718 *********
2025-08-29 15:13:59.203708 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-08-29 15:13:59.203715 | orchestrator |
2025-08-29 15:13:59.203721 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-08-29 15:13:59.203728 | orchestrator | Friday 29 August 2025 15:10:00 +0000 (0:00:03.471) 0:00:10.189 *********
2025-08-29 15:13:59.203733 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-08-29 15:13:59.203738 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-08-29 15:13:59.203742 | orchestrator |
2025-08-29 15:13:59.203746 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-08-29 15:13:59.203750 | orchestrator | Friday 29 August 2025 15:10:07 +0000 (0:00:06.619) 0:00:16.808 *********
2025-08-29 15:13:59.203759 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 15:13:59.203763 | orchestrator |
2025-08-29 15:13:59.203767 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-08-29 15:13:59.203771 | orchestrator | Friday 29 August 2025 15:10:10 +0000 (0:00:03.417) 0:00:20.226 *********
2025-08-29 15:13:59.203775 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:13:59.203778 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-08-29 15:13:59.203782 | orchestrator |
2025-08-29 15:13:59.203786 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-08-29 15:13:59.203790 | orchestrator | Friday 29 August 2025 15:10:14 +0000 (0:00:03.855) 0:00:24.081 *********
2025-08-29 15:13:59.203793 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 15:13:59.203797 | orchestrator |
2025-08-29 15:13:59.203801 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-08-29 15:13:59.203805 | orchestrator | Friday 29 August 2025 15:10:18 +0000 (0:00:03.359) 0:00:27.441 *********
2025-08-29 15:13:59.203809 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-08-29 15:13:59.203813 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-08-29 15:13:59.203816 | orchestrator |
2025-08-29 15:13:59.203820 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-08-29 15:13:59.203824 | orchestrator | Friday 29 August 2025 15:10:25 +0000 (0:00:07.876) 0:00:35.317 *********
2025-08-29 15:13:59.203828 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:59.203831 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:59.203841 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:59.203845 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:59.203849 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:59.203853 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:59.203856 | orchestrator |
2025-08-29 15:13:59.203860 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-08-29 15:13:59.203864 | orchestrator | Friday 29 August 2025 15:10:26 +0000 (0:00:00.832) 0:00:36.150 *********
2025-08-29 15:13:59.203867 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:59.203871 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:59.203875 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:59.203878 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:59.203882 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:59.203886 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:59.203889 | orchestrator |
2025-08-29 15:13:59.203893 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-08-29 15:13:59.203897 | orchestrator | Friday 29 August 2025 15:10:28 +0000 (0:00:02.010) 0:00:38.160 *********
2025-08-29 15:13:59.203900 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:13:59.203904 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:13:59.203908 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:13:59.203911 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:13:59.203915 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:13:59.203933 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:13:59.203937 | orchestrator |
2025-08-29 15:13:59.203941 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-08-29 15:13:59.203945 | orchestrator | Friday 29 August 2025 15:10:29 +0000 (0:00:01.101) 0:00:39.261 *********
2025-08-29 15:13:59.203948 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:59.203952 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:59.203956 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:59.203959 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:59.203963 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:59.203967 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:59.203970 | orchestrator |
2025-08-29 15:13:59.203974 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-08-29 15:13:59.203978 | orchestrator | Friday 29 August 2025 15:10:31 +0000 (0:00:02.104) 0:00:41.366 *********
2025-08-29
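The endpoint registration above creates one internal and one public Keystone endpoint for neutron, both on port 9696 but behind different FQDNs. Assembling those URLs is plain string work; a sketch with the FQDNs taken from this log and an illustrative helper name:

```python
def endpoint_urls(internal_fqdn, external_fqdn, port):
    """Return the internal/public endpoint URLs as registered in Keystone."""
    return {
        "internal": f"https://{internal_fqdn}:{port}",
        "public": f"https://{external_fqdn}:{port}",
    }

urls = endpoint_urls("api-int.testbed.osism.xyz", "api.testbed.osism.xyz", 9696)
print(urls["internal"])  # → https://api-int.testbed.osism.xyz:9696
print(urls["public"])    # → https://api.testbed.osism.xyz:9696
```

Splitting the two interfaces this way lets internal service-to-service traffic stay on `api-int` while tenants reach the same API through the public VIP.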
15:13:59.203985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.203996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204025 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204029 | orchestrator | 2025-08-29 15:13:59.204033 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-08-29 15:13:59.204037 | orchestrator | Friday 29 August 2025 15:10:34 +0000 (0:00:02.787) 0:00:44.153 ********* 2025-08-29 15:13:59.204041 | orchestrator | [WARNING]: Skipped 2025-08-29 15:13:59.204045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-08-29 15:13:59.204049 | orchestrator | due to this access issue: 2025-08-29 15:13:59.204053 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-08-29 15:13:59.204057 | orchestrator | a directory 2025-08-29 15:13:59.204061 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 15:13:59.204065 | orchestrator | 2025-08-29 15:13:59.204068 | 
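Each container definition above carries a Docker-style healthcheck: `healthcheck_curl http://<ip>:9696` for the API server and `healthcheck_port neutron-ovn-metadata-agent 6640` for the metadata agent, with shared interval/retries/timeout settings. A helper that assembles the HTTP variant of those dicts, mirroring the values visible in the log (the helper itself is illustrative, not kolla-ansible code):

```python
def curl_healthcheck(ip, port, interval="30", retries="3", start_period="5", timeout="30"):
    """Build a kolla-style HTTP healthcheck dict like those in the log."""
    return {
        "interval": interval,
        "retries": retries,
        "start_period": start_period,
        "test": ["CMD-SHELL", f"healthcheck_curl http://{ip}:{port}"],
        "timeout": timeout,
    }

hc = curl_healthcheck("192.168.16.10", 9696)
print(hc["test"])  # → ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696']
```

The `CMD-SHELL` form runs the check through the container's shell, which is what allows a helper script like `healthcheck_curl` to be used instead of a bare binary invocation.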
orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:13:59.204072 | orchestrator | Friday 29 August 2025 15:10:35 +0000 (0:00:00.879) 0:00:45.033 ********* 2025-08-29 15:13:59.204078 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:13:59.204084 | orchestrator | 2025-08-29 15:13:59.204087 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-08-29 15:13:59.204091 | orchestrator | Friday 29 August 2025 15:10:36 +0000 (0:00:01.190) 0:00:46.224 ********* 2025-08-29 15:13:59.204095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204132 | orchestrator | 2025-08-29 15:13:59.204151 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend 
internal TLS certificate] *** 2025-08-29 15:13:59.204159 | orchestrator | Friday 29 August 2025 15:10:39 +0000 (0:00:02.677) 0:00:48.901 ********* 2025-08-29 15:13:59.204163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204167 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.204171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204175 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.204179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204183 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.204190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204198 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:13:59.204207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204211 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.204215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204219 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.204223 | orchestrator | 2025-08-29 15:13:59.204227 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-08-29 15:13:59.204231 | orchestrator | Friday 29 August 2025 15:10:41 +0000 (0:00:02.082) 0:00:50.984 
********* 2025-08-29 15:13:59.204234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204238 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.204249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204256 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 15:13:59.204260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204264 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.204271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204275 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.204279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204283 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.204287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204291 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.204294 | orchestrator | 2025-08-29 15:13:59.204298 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-08-29 15:13:59.204302 | orchestrator | Friday 29 August 2025 15:10:44 +0000 (0:00:02.492) 0:00:53.476 ********* 2025-08-29 15:13:59.204306 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.204309 | orchestrator | skipping: [testbed-node-1] 
2025-08-29 15:13:59.204313 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:59.204317 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:59.204324 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:59.204327 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:59.204331 | orchestrator |
2025-08-29 15:13:59.204335 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-08-29 15:13:59.204339 | orchestrator | Friday 29 August 2025 15:10:46 +0000 (0:00:02.037) 0:00:55.513 *********
2025-08-29 15:13:59.204342 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:59.204346 | orchestrator |
2025-08-29 15:13:59.204353 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-08-29 15:13:59.204357 | orchestrator | Friday 29 August 2025 15:10:46 +0000 (0:00:00.117) 0:00:55.631 *********
2025-08-29 15:13:59.204360 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:13:59.204364 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:13:59.204368 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:13:59.204372 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:59.204375 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:59.204379 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:59.204383 | orchestrator |
2025-08-29 15:13:59.204387 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-08-29 15:13:59.204390 | orchestrator | Friday 29 August 2025 15:10:46 +0000 (0:00:00.624) 0:00:56.256 *********
2025-08-29 15:13:59.204681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204706 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.204715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204721 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.204727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204743 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.204750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.204764 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.204771 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204777 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.204793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.204799 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.204804 | orchestrator | 2025-08-29 15:13:59.204811 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-08-29 15:13:59.204817 | orchestrator | Friday 29 August 2025 15:10:49 +0000 (0:00:02.177) 0:00:58.433 ********* 2025-08-29 15:13:59.204824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204874 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204881 | orchestrator | 2025-08-29 15:13:59.204889 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-08-29 15:13:59.204900 | orchestrator | Friday 29 August 2025 15:10:53 +0000 (0:00:04.112) 0:01:02.545 ********* 2025-08-29 15:13:59.204907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204917 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.204938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.204955 | orchestrator | 2025-08-29 15:13:59.204961 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-08-29 15:13:59.204966 | orchestrator | Friday 29 August 2025 15:11:01 +0000 (0:00:07.875) 0:01:10.421 ********* 2025-08-29 15:13:59.204976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205112 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205129 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205325 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205339 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205351 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205369 | orchestrator | skipping: [testbed-node-4] 2025-08-29 
15:13:59.205375 | orchestrator |
2025-08-29 15:13:59.205381 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-08-29 15:13:59.205386 | orchestrator | Friday 29 August 2025 15:11:04 +0000 (0:00:03.772) 0:01:14.193 *********
2025-08-29 15:13:59.205392 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:59.205398 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:13:59.205403 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:13:59.205410 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:13:59.205416 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:13:59.205422 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:13:59.205428 | orchestrator |
2025-08-29 15:13:59.205434 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-08-29 15:13:59.205445 | orchestrator | Friday 29 August 2025 15:11:08 +0000 (0:00:03.433) 0:01:17.627 *********
2025-08-29 15:13:59.205453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-08-29 15:13:59.205466 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:13:59.205472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name':
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205478 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205486 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.205506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.205510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.205519 | orchestrator | 2025-08-29 15:13:59.205523 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-08-29 15:13:59.205527 | orchestrator | Friday 29 August 2025 15:11:12 +0000 (0:00:04.764) 0:01:22.391 ********* 2025-08-29 15:13:59.205531 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205535 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205538 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205542 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205546 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205550 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205553 | orchestrator | 2025-08-29 15:13:59.205557 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-08-29 15:13:59.205561 | orchestrator | Friday 29 August 2025 15:11:17 +0000 (0:00:04.181) 0:01:26.572 ********* 2025-08-29 15:13:59.205565 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205568 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205572 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205576 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205580 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205583 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205587 | orchestrator | 2025-08-29 15:13:59.205591 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-08-29 15:13:59.205595 | orchestrator | Friday 29 
August 2025 15:11:20 +0000 (0:00:03.128) 0:01:29.701 ********* 2025-08-29 15:13:59.205599 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205603 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205607 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205610 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205614 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205618 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205621 | orchestrator | 2025-08-29 15:13:59.205625 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-08-29 15:13:59.205629 | orchestrator | Friday 29 August 2025 15:11:23 +0000 (0:00:02.742) 0:01:32.443 ********* 2025-08-29 15:13:59.205633 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205636 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205640 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205644 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205647 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205651 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205655 | orchestrator | 2025-08-29 15:13:59.205659 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-08-29 15:13:59.205663 | orchestrator | Friday 29 August 2025 15:11:25 +0000 (0:00:02.474) 0:01:34.918 ********* 2025-08-29 15:13:59.205666 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205670 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205674 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205678 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205681 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205685 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205689 | orchestrator | 2025-08-29 15:13:59.205696 | orchestrator 
| TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-08-29 15:13:59.205704 | orchestrator | Friday 29 August 2025 15:11:27 +0000 (0:00:02.266) 0:01:37.185 ********* 2025-08-29 15:13:59.205708 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205712 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205715 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205719 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205723 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205726 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205730 | orchestrator | 2025-08-29 15:13:59.205734 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-08-29 15:13:59.205738 | orchestrator | Friday 29 August 2025 15:11:29 +0000 (0:00:01.887) 0:01:39.072 ********* 2025-08-29 15:13:59.205742 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:13:59.205746 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205750 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:13:59.205754 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205757 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:13:59.205761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205765 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:13:59.205769 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205775 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:13:59.205779 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205783 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-08-29 15:13:59.205787 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205791 | orchestrator | 2025-08-29 15:13:59.205794 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-08-29 15:13:59.205798 | orchestrator | Friday 29 August 2025 15:11:31 +0000 (0:00:01.991) 0:01:41.064 ********* 2025-08-29 15:13:59.205802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205806 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205819 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205830 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205838 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205850 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205858 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 15:13:59.205862 | orchestrator | 2025-08-29 15:13:59.205866 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-08-29 15:13:59.205870 | orchestrator | Friday 29 August 2025 15:11:33 +0000 (0:00:02.237) 0:01:43.302 ********* 2025-08-29 15:13:59.205873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205881 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205893 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.205905 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205913 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205924 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.205932 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205935 | orchestrator | 2025-08-29 15:13:59.205939 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] 
******************************* 2025-08-29 15:13:59.205943 | orchestrator | Friday 29 August 2025 15:11:37 +0000 (0:00:03.409) 0:01:46.712 ********* 2025-08-29 15:13:59.205947 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.205951 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205955 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205960 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.205967 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.205971 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.205975 | orchestrator | 2025-08-29 15:13:59.205980 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-08-29 15:13:59.205984 | orchestrator | Friday 29 August 2025 15:11:39 +0000 (0:00:02.233) 0:01:48.945 ********* 2025-08-29 15:13:59.205988 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.205992 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.205996 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206001 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:13:59.206005 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:13:59.206009 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:13:59.206057 | orchestrator | 2025-08-29 15:13:59.206062 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-08-29 15:13:59.206066 | orchestrator | Friday 29 August 2025 15:11:43 +0000 (0:00:04.023) 0:01:52.969 ********* 2025-08-29 15:13:59.206070 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206074 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.206078 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206082 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206087 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206091 | orchestrator | skipping: [testbed-node-5] 
2025-08-29 15:13:59.206096 | orchestrator | 2025-08-29 15:13:59.206100 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-08-29 15:13:59.206104 | orchestrator | Friday 29 August 2025 15:11:45 +0000 (0:00:02.336) 0:01:55.305 ********* 2025-08-29 15:13:59.206108 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206113 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206117 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206124 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206129 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.206133 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206177 | orchestrator | 2025-08-29 15:13:59.206182 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-08-29 15:13:59.206187 | orchestrator | Friday 29 August 2025 15:11:48 +0000 (0:00:02.856) 0:01:58.162 ********* 2025-08-29 15:13:59.206191 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.206200 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206205 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206209 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206213 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206217 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206222 | orchestrator | 2025-08-29 15:13:59.206226 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-08-29 15:13:59.206230 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:03.139) 0:02:01.302 ********* 2025-08-29 15:13:59.206235 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206239 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206243 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206247 | orchestrator | skipping: [testbed-node-3] 
2025-08-29 15:13:59.206250 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206254 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206258 | orchestrator | 2025-08-29 15:13:59.206262 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-08-29 15:13:59.206266 | orchestrator | Friday 29 August 2025 15:11:54 +0000 (0:00:02.372) 0:02:03.674 ********* 2025-08-29 15:13:59.206269 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206273 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206277 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206281 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206284 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206288 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.206292 | orchestrator | 2025-08-29 15:13:59.206296 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-08-29 15:13:59.206300 | orchestrator | Friday 29 August 2025 15:11:58 +0000 (0:00:03.788) 0:02:07.462 ********* 2025-08-29 15:13:59.206303 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206307 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206311 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206314 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.206318 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206322 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206326 | orchestrator | 2025-08-29 15:13:59.206330 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-08-29 15:13:59.206333 | orchestrator | Friday 29 August 2025 15:12:01 +0000 (0:00:02.988) 0:02:10.451 ********* 2025-08-29 15:13:59.206337 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206341 | orchestrator | skipping: [testbed-node-4] 
2025-08-29 15:13:59.206345 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206349 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.206353 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206356 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206360 | orchestrator | 2025-08-29 15:13:59.206364 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-08-29 15:13:59.206368 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:03.624) 0:02:14.076 ********* 2025-08-29 15:13:59.206371 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:13:59.206376 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.206380 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:13:59.206384 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206388 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:13:59.206392 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206395 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:13:59.206401 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206406 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:13:59.206416 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206426 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-08-29 15:13:59.206433 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206439 | orchestrator | 2025-08-29 15:13:59.206444 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] 
******************************** 2025-08-29 15:13:59.206449 | orchestrator | Friday 29 August 2025 15:12:08 +0000 (0:00:04.276) 0:02:18.352 ********* 2025-08-29 15:13:59.206459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.206465 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.206477 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-08-29 15:13:59.206488 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.206508 | orchestrator | skipping: 
[testbed-node-3] 2025-08-29 15:13:59.206518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.206525 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-08-29 15:13:59.206544 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206548 | orchestrator | 2025-08-29 15:13:59.206551 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-08-29 15:13:59.206555 | orchestrator | Friday 29 August 2025 15:12:12 +0000 (0:00:03.460) 0:02:21.812 
********* 2025-08-29 15:13:59.206559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.206564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.206573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.206581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.206591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-08-29 15:13:59.206597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-08-29 15:13:59.206608 | orchestrator | 2025-08-29 15:13:59.206615 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-08-29 15:13:59.206621 | orchestrator | Friday 29 August 2025 15:12:15 +0000 (0:00:03.042) 0:02:24.858 ********* 2025-08-29 15:13:59.206626 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:13:59.206632 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:13:59.206638 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:13:59.206644 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:13:59.206649 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:13:59.206654 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:13:59.206661 | orchestrator | 2025-08-29 15:13:59.206667 | orchestrator | TASK [neutron : Creating Neutron database] 
************************************* 2025-08-29 15:13:59.206673 | orchestrator | Friday 29 August 2025 15:12:16 +0000 (0:00:01.031) 0:02:25.890 ********* 2025-08-29 15:13:59.206686 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:59.206693 | orchestrator | 2025-08-29 15:13:59.206697 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-08-29 15:13:59.206701 | orchestrator | Friday 29 August 2025 15:12:18 +0000 (0:00:02.295) 0:02:28.186 ********* 2025-08-29 15:13:59.206705 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:59.206708 | orchestrator | 2025-08-29 15:13:59.206712 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-08-29 15:13:59.206716 | orchestrator | Friday 29 August 2025 15:12:21 +0000 (0:00:02.298) 0:02:30.484 ********* 2025-08-29 15:13:59.206719 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:59.206723 | orchestrator | 2025-08-29 15:13:59.206727 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:13:59.206731 | orchestrator | Friday 29 August 2025 15:13:01 +0000 (0:00:40.156) 0:03:10.640 ********* 2025-08-29 15:13:59.206734 | orchestrator | 2025-08-29 15:13:59.206738 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:13:59.206742 | orchestrator | Friday 29 August 2025 15:13:01 +0000 (0:00:00.124) 0:03:10.764 ********* 2025-08-29 15:13:59.206746 | orchestrator | 2025-08-29 15:13:59.206749 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:13:59.206753 | orchestrator | Friday 29 August 2025 15:13:01 +0000 (0:00:00.331) 0:03:11.096 ********* 2025-08-29 15:13:59.206757 | orchestrator | 2025-08-29 15:13:59.206761 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:13:59.206764 | 
orchestrator | Friday 29 August 2025 15:13:01 +0000 (0:00:00.119) 0:03:11.215 ********* 2025-08-29 15:13:59.206768 | orchestrator | 2025-08-29 15:13:59.206772 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:13:59.206779 | orchestrator | Friday 29 August 2025 15:13:01 +0000 (0:00:00.095) 0:03:11.311 ********* 2025-08-29 15:13:59.206783 | orchestrator | 2025-08-29 15:13:59.206787 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-08-29 15:13:59.206790 | orchestrator | Friday 29 August 2025 15:13:02 +0000 (0:00:00.138) 0:03:11.449 ********* 2025-08-29 15:13:59.206794 | orchestrator | 2025-08-29 15:13:59.206798 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-08-29 15:13:59.206802 | orchestrator | Friday 29 August 2025 15:13:02 +0000 (0:00:00.221) 0:03:11.670 ********* 2025-08-29 15:13:59.206805 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:13:59.206809 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:13:59.206813 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:13:59.206817 | orchestrator | 2025-08-29 15:13:59.206820 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-08-29 15:13:59.206824 | orchestrator | Friday 29 August 2025 15:13:29 +0000 (0:00:26.941) 0:03:38.612 ********* 2025-08-29 15:13:59.206828 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:13:59.206832 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:13:59.206835 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:13:59.206839 | orchestrator | 2025-08-29 15:13:59.206843 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:13:59.206847 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:13:59.206853 | orchestrator | 
testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 15:13:59.206861 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-08-29 15:13:59.206865 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:13:59.206869 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:13:59.206879 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-08-29 15:13:59.206882 | orchestrator | 2025-08-29 15:13:59.206887 | orchestrator | 2025-08-29 15:13:59.206890 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:13:59.206894 | orchestrator | Friday 29 August 2025 15:13:55 +0000 (0:00:26.709) 0:04:05.322 ********* 2025-08-29 15:13:59.206898 | orchestrator | =============================================================================== 2025-08-29 15:13:59.206901 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.16s 2025-08-29 15:13:59.206905 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.94s 2025-08-29 15:13:59.206909 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 26.71s 2025-08-29 15:13:59.206913 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.88s 2025-08-29 15:13:59.206916 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.88s 2025-08-29 15:13:59.206920 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.62s 2025-08-29 15:13:59.206924 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.76s 2025-08-29 15:13:59.206927 | orchestrator | neutron 
: Copying over neutron-tls-proxy.cfg ---------------------------- 4.28s 2025-08-29 15:13:59.206931 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 4.18s 2025-08-29 15:13:59.206935 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.11s 2025-08-29 15:13:59.206938 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.02s 2025-08-29 15:13:59.206942 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.86s 2025-08-29 15:13:59.206946 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.79s 2025-08-29 15:13:59.206949 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.77s 2025-08-29 15:13:59.206953 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.62s 2025-08-29 15:13:59.206957 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.47s 2025-08-29 15:13:59.206960 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.46s 2025-08-29 15:13:59.206964 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.43s 2025-08-29 15:13:59.206968 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.42s 2025-08-29 15:13:59.206972 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 3.41s 2025-08-29 15:13:59.206975 | orchestrator | 2025-08-29 15:13:59 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:02.238750 | orchestrator | 2025-08-29 15:14:02 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:14:02.242434 | orchestrator | 2025-08-29 15:14:02 | INFO  | Task caf7e1e6-bf31-4657-a700-5410817050ed is in state STARTED 2025-08-29 15:14:02.244544 | orchestrator | 2025-08-29 
15:14:02 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state STARTED 2025-08-29 15:14:02.246218 | orchestrator | 2025-08-29 15:14:02 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state STARTED 2025-08-29 15:14:02.246645 | orchestrator | 2025-08-29 15:14:02 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:26.642322 | orchestrator | 2025-08-29 15:14:26 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:14:26.642968 | orchestrator | 2025-08-29 15:14:26 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED 2025-08-29 15:14:26.645564 | orchestrator | 2025-08-29 15:14:26 | INFO  | Task caf7e1e6-bf31-4657-a700-5410817050ed is in state STARTED 2025-08-29 15:14:26.646817 | orchestrator | 2025-08-29 15:14:26 | INFO  | Task 67d273c6-2966-43b1-9dc9-bda251e7cbe1 is in state SUCCESS 2025-08-29 15:14:26.649059 | orchestrator | 2025-08-29 15:14:26.649131 | orchestrator | 2025-08-29 15:14:26.649143 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:14:26.649150 | orchestrator | 2025-08-29 15:14:26.649157 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:14:26.649163 | orchestrator | Friday 29 August 2025 15:13:09 +0000 (0:00:00.655) 0:00:00.655
********* 2025-08-29 15:14:26.649169 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:26.649176 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:26.649184 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:26.649192 | orchestrator | 2025-08-29 15:14:26.649198 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:14:26.649204 | orchestrator | Friday 29 August 2025 15:13:09 +0000 (0:00:00.671) 0:00:01.326 ********* 2025-08-29 15:14:26.649211 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-08-29 15:14:26.649217 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-08-29 15:14:26.649223 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-08-29 15:14:26.649228 | orchestrator | 2025-08-29 15:14:26.649234 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-08-29 15:14:26.649240 | orchestrator | 2025-08-29 15:14:26.649246 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:14:26.649252 | orchestrator | Friday 29 August 2025 15:13:11 +0000 (0:00:01.127) 0:00:02.454 ********* 2025-08-29 15:14:26.649259 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:14:26.649266 | orchestrator | 2025-08-29 15:14:26.649274 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-08-29 15:14:26.649281 | orchestrator | Friday 29 August 2025 15:13:12 +0000 (0:00:01.085) 0:00:03.539 ********* 2025-08-29 15:14:26.649289 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-08-29 15:14:26.649295 | orchestrator | 2025-08-29 15:14:26.649302 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-08-29 15:14:26.649308 | orchestrator | Friday 29 
August 2025 15:13:15 +0000 (0:00:03.703) 0:00:07.243 ********* 2025-08-29 15:14:26.649314 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-08-29 15:14:26.649321 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-08-29 15:14:26.649327 | orchestrator | 2025-08-29 15:14:26.649333 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-08-29 15:14:26.649339 | orchestrator | Friday 29 August 2025 15:13:22 +0000 (0:00:06.684) 0:00:13.928 ********* 2025-08-29 15:14:26.649346 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:14:26.649353 | orchestrator | 2025-08-29 15:14:26.649360 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-08-29 15:14:26.649389 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:02.931) 0:00:16.859 ********* 2025-08-29 15:14:26.649393 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:14:26.649398 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-08-29 15:14:26.649401 | orchestrator | 2025-08-29 15:14:26.649405 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-08-29 15:14:26.649409 | orchestrator | Friday 29 August 2025 15:13:28 +0000 (0:00:03.408) 0:00:20.268 ********* 2025-08-29 15:14:26.649413 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:14:26.649417 | orchestrator | 2025-08-29 15:14:26.649421 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-08-29 15:14:26.649426 | orchestrator | Friday 29 August 2025 15:13:31 +0000 (0:00:02.689) 0:00:22.957 ********* 2025-08-29 15:14:26.649429 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-08-29 15:14:26.649433 | 
orchestrator | 2025-08-29 15:14:26.649437 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:14:26.649441 | orchestrator | Friday 29 August 2025 15:13:36 +0000 (0:00:04.860) 0:00:27.818 ********* 2025-08-29 15:14:26.649445 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.649448 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.649452 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.649456 | orchestrator | 2025-08-29 15:14:26.649459 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-08-29 15:14:26.649463 | orchestrator | Friday 29 August 2025 15:13:37 +0000 (0:00:00.879) 0:00:28.698 ********* 2025-08-29 15:14:26.649480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649511 | orchestrator | 2025-08-29 15:14:26.649515 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-08-29 15:14:26.649519 | orchestrator | Friday 29 August 2025 15:13:39 +0000 (0:00:01.965) 0:00:30.663 ********* 2025-08-29 15:14:26.649523 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.649526 | 
orchestrator | 2025-08-29 15:14:26.649530 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-08-29 15:14:26.649534 | orchestrator | Friday 29 August 2025 15:13:39 +0000 (0:00:00.276) 0:00:30.940 ********* 2025-08-29 15:14:26.649538 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.649541 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.649545 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.649549 | orchestrator | 2025-08-29 15:14:26.649552 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-08-29 15:14:26.649556 | orchestrator | Friday 29 August 2025 15:13:40 +0000 (0:00:00.885) 0:00:31.825 ********* 2025-08-29 15:14:26.649560 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:14:26.649564 | orchestrator | 2025-08-29 15:14:26.649567 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-08-29 15:14:26.649571 | orchestrator | Friday 29 August 2025 15:13:41 +0000 (0:00:00.644) 0:00:32.469 ********* 2025-08-29 15:14:26.649578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
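Each service item echoed above carries a `healthcheck` dict (interval, retries, start_period, test, timeout). As a minimal sketch of what those fields correspond to in Docker terms (the flag mapping below is illustrative only; kolla-ansible applies these options through its own container module rather than a literal `docker run`), they can be translated like this:

```python
def healthcheck_flags(hc: dict) -> list[str]:
    """Translate a kolla-style healthcheck dict, as seen in the deploy log,
    into docker-run style health flags. Illustrative mapping only."""
    # CMD-SHELL entries run the remaining list items as a shell command string.
    if hc["test"][0] == "CMD-SHELL":
        cmd = " ".join(hc["test"][1:])
    else:
        cmd = " ".join(hc["test"])
    # Kolla logs the durations as bare second counts; Docker wants units.
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Healthcheck dict copied from the placement_api item in the log above.
placement_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
    "timeout": "30",
}
print(healthcheck_flags(placement_hc))
```

The `healthcheck_curl` seen in the `test` entries is kolla's health-probe helper shipped inside the images; only its exit-code contract (0 = healthy, non-zero = unhealthy) is assumed here.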
2025-08-29 15:14:26.649598 | orchestrator | 2025-08-29 15:14:26.649602 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-08-29 15:14:26.649606 | orchestrator | Friday 29 August 2025 15:13:42 +0000 (0:00:01.568) 0:00:34.038 ********* 2025-08-29 15:14:26.649610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.649614 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.649618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.649622 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.649631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.649636 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.649639 | orchestrator | 2025-08-29 15:14:26.649643 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-08-29 15:14:26.649647 | orchestrator | Friday 29 August 2025 15:13:43 +0000 (0:00:00.793) 0:00:34.832 ********* 2025-08-29 15:14:26.649651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.649658 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.649662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.649666 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.649712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.649717 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.649721 | orchestrator | 2025-08-29 15:14:26.649725 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-08-29 15:14:26.649728 | orchestrator | Friday 29 August 2025 15:13:44 +0000 (0:00:00.688) 0:00:35.521 ********* 2025-08-29 15:14:26.649739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649814 | orchestrator | 2025-08-29 15:14:26.649820 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-08-29 15:14:26.649826 | orchestrator | Friday 29 August 2025 15:13:45 +0000 (0:00:01.415) 
0:00:36.936 ********* 2025-08-29 15:14:26.649856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.649949 | orchestrator | 2025-08-29 15:14:26.650152 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-08-29 15:14:26.650172 | orchestrator | Friday 29 August 2025 15:13:47 +0000 (0:00:02.384) 0:00:39.321 ********* 2025-08-29 15:14:26.650179 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:14:26.650187 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:14:26.650194 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-08-29 15:14:26.650201 | orchestrator | 2025-08-29 15:14:26.650206 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-08-29 15:14:26.650210 | orchestrator | Friday 29 August 2025 15:13:49 +0000 (0:00:01.848) 0:00:41.169 ********* 2025-08-29 15:14:26.650214 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.650219 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:26.650222 | orchestrator 
| changed: [testbed-node-2] 2025-08-29 15:14:26.650226 | orchestrator | 2025-08-29 15:14:26.650230 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-08-29 15:14:26.650234 | orchestrator | Friday 29 August 2025 15:13:51 +0000 (0:00:02.179) 0:00:43.349 ********* 2025-08-29 15:14:26.650238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.650243 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.650247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.650258 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.650275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-08-29 15:14:26.650280 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.650283 | orchestrator | 2025-08-29 15:14:26.650287 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-08-29 15:14:26.650291 | orchestrator | Friday 29 August 2025 15:13:53 +0000 (0:00:01.172) 0:00:44.521 ********* 2025-08-29 15:14:26.650295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.650299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.650303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-08-29 15:14:26.650311 | orchestrator | 2025-08-29 15:14:26.650315 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-08-29 15:14:26.650318 | orchestrator | Friday 29 August 2025 15:13:54 +0000 (0:00:01.721) 0:00:46.243 ********* 2025-08-29 15:14:26.650322 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.650326 | orchestrator | 2025-08-29 15:14:26.650330 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-08-29 15:14:26.650333 | orchestrator | Friday 29 August 2025 15:13:57 +0000 (0:00:02.601) 0:00:48.845 ********* 2025-08-29 15:14:26.650340 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.650344 | orchestrator | 2025-08-29 15:14:26.650347 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-08-29 15:14:26.650351 | orchestrator | Friday 29 August 2025 15:14:00 +0000 (0:00:02.626) 0:00:51.471 ********* 2025-08-29 15:14:26.650355 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.650403 | orchestrator | 2025-08-29 15:14:26.650408 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:14:26.650412 | orchestrator | Friday 29 August 2025 15:14:13 +0000 (0:00:13.799) 0:01:05.270 ********* 2025-08-29 15:14:26.650416 | orchestrator | 2025-08-29 15:14:26.650420 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:14:26.650423 | 
orchestrator | Friday 29 August 2025 15:14:13 +0000 (0:00:00.071) 0:01:05.342 ********* 2025-08-29 15:14:26.650838 | orchestrator | 2025-08-29 15:14:26.650849 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-08-29 15:14:26.650854 | orchestrator | Friday 29 August 2025 15:14:14 +0000 (0:00:00.082) 0:01:05.425 ********* 2025-08-29 15:14:26.650857 | orchestrator | 2025-08-29 15:14:26.650861 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-08-29 15:14:26.650865 | orchestrator | Friday 29 August 2025 15:14:14 +0000 (0:00:00.082) 0:01:05.507 ********* 2025-08-29 15:14:26.650869 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.650873 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:26.650877 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:26.650881 | orchestrator | 2025-08-29 15:14:26.650885 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:14:26.650891 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:14:26.650896 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:14:26.650900 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:14:26.650904 | orchestrator | 2025-08-29 15:14:26.650908 | orchestrator | 2025-08-29 15:14:26.650911 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:14:26.650915 | orchestrator | Friday 29 August 2025 15:14:25 +0000 (0:00:11.378) 0:01:16.885 ********* 2025-08-29 15:14:26.650919 | orchestrator | =============================================================================== 2025-08-29 15:14:26.650922 | orchestrator | placement : Running placement bootstrap container 
---------------------- 13.80s 2025-08-29 15:14:26.650926 | orchestrator | placement : Restart placement-api container ---------------------------- 11.38s 2025-08-29 15:14:26.650930 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.68s 2025-08-29 15:14:26.650934 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.86s 2025-08-29 15:14:26.650937 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.70s 2025-08-29 15:14:26.650941 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.41s 2025-08-29 15:14:26.650945 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.93s 2025-08-29 15:14:26.650948 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.69s 2025-08-29 15:14:26.650960 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.63s 2025-08-29 15:14:26.650963 | orchestrator | placement : Creating placement databases -------------------------------- 2.60s 2025-08-29 15:14:26.650967 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.38s 2025-08-29 15:14:26.650971 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.18s 2025-08-29 15:14:26.650974 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.97s 2025-08-29 15:14:26.650978 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.85s 2025-08-29 15:14:26.650982 | orchestrator | placement : Check placement containers ---------------------------------- 1.72s 2025-08-29 15:14:26.650986 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.57s 2025-08-29 15:14:26.650989 | orchestrator | placement : Copying over config.json files for services 
----------------- 1.42s 2025-08-29 15:14:26.650993 | orchestrator | placement : Copying over existing policy file --------------------------- 1.17s 2025-08-29 15:14:26.650997 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.13s 2025-08-29 15:14:26.651001 | orchestrator | placement : include_tasks ----------------------------------------------- 1.09s 2025-08-29 15:14:26.651005 | orchestrator | 2025-08-29 15:14:26 | INFO  | Task 5898feea-50e0-4b8f-af79-0537f85d6869 is in state SUCCESS 2025-08-29 15:14:26.651008 | orchestrator | 2025-08-29 15:14:26.651012 | orchestrator | 2025-08-29 15:14:26.651016 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:14:26.651019 | orchestrator | 2025-08-29 15:14:26.651023 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:14:26.651027 | orchestrator | Friday 29 August 2025 15:11:21 +0000 (0:00:00.728) 0:00:00.728 ********* 2025-08-29 15:14:26.651031 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:14:26.651074 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:14:26.651080 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:14:26.651084 | orchestrator | 2025-08-29 15:14:26.651087 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:14:26.651091 | orchestrator | Friday 29 August 2025 15:11:22 +0000 (0:00:00.570) 0:00:01.299 ********* 2025-08-29 15:14:26.651095 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-08-29 15:14:26.651105 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-08-29 15:14:26.651166 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-08-29 15:14:26.651175 | orchestrator | 2025-08-29 15:14:26.651182 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-08-29 15:14:26.651188 
| orchestrator | 2025-08-29 15:14:26.651194 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:14:26.651200 | orchestrator | Friday 29 August 2025 15:11:22 +0000 (0:00:00.392) 0:00:01.692 ********* 2025-08-29 15:14:26.651205 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:14:26.651210 | orchestrator | 2025-08-29 15:14:26.651213 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-08-29 15:14:26.651223 | orchestrator | Friday 29 August 2025 15:11:23 +0000 (0:00:00.641) 0:00:02.333 ********* 2025-08-29 15:14:26.651227 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-08-29 15:14:26.651231 | orchestrator | 2025-08-29 15:14:26.651234 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-08-29 15:14:26.651238 | orchestrator | Friday 29 August 2025 15:11:27 +0000 (0:00:03.818) 0:00:06.151 ********* 2025-08-29 15:14:26.651242 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-08-29 15:14:26.651246 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-08-29 15:14:26.651250 | orchestrator | 2025-08-29 15:14:26.651253 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-08-29 15:14:26.651263 | orchestrator | Friday 29 August 2025 15:11:34 +0000 (0:00:06.839) 0:00:12.990 ********* 2025-08-29 15:14:26.651267 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:14:26.651271 | orchestrator | 2025-08-29 15:14:26.651274 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-08-29 15:14:26.651278 | orchestrator | Friday 29 August 2025 15:11:37 +0000 (0:00:03.232) 
0:00:16.223 ********* 2025-08-29 15:14:26.651394 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:14:26.651405 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-08-29 15:14:26.651655 | orchestrator | 2025-08-29 15:14:26.651663 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-08-29 15:14:26.651668 | orchestrator | Friday 29 August 2025 15:11:41 +0000 (0:00:03.844) 0:00:20.067 ********* 2025-08-29 15:14:26.651672 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:14:26.651677 | orchestrator | 2025-08-29 15:14:26.651681 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-08-29 15:14:26.651686 | orchestrator | Friday 29 August 2025 15:11:44 +0000 (0:00:03.398) 0:00:23.466 ********* 2025-08-29 15:14:26.651691 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-08-29 15:14:26.651695 | orchestrator | 2025-08-29 15:14:26.651699 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-08-29 15:14:26.651704 | orchestrator | Friday 29 August 2025 15:11:49 +0000 (0:00:04.527) 0:00:27.994 ********* 2025-08-29 15:14:26.651710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.651717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.651741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.651762 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651821 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.651893 | orchestrator | 2025-08-29 15:14:26.651897 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-08-29 15:14:26.651901 | orchestrator | Friday 29 August 2025 15:11:53 +0000 (0:00:04.205) 0:00:32.199 ********* 2025-08-29 15:14:26.651905 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.651909 | orchestrator | 
2025-08-29 15:14:26.651912 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-08-29 15:14:26.651916 | orchestrator | Friday 29 August 2025 15:11:53 +0000 (0:00:00.223) 0:00:32.423 ********* 2025-08-29 15:14:26.651920 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.651924 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.652181 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.652193 | orchestrator | 2025-08-29 15:14:26.652197 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:14:26.652201 | orchestrator | Friday 29 August 2025 15:11:54 +0000 (0:00:00.348) 0:00:32.771 ********* 2025-08-29 15:14:26.652205 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:14:26.652209 | orchestrator | 2025-08-29 15:14:26.652213 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-08-29 15:14:26.652217 | orchestrator | Friday 29 August 2025 15:11:54 +0000 (0:00:00.786) 0:00:33.557 ********* 2025-08-29 15:14:26.652221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.652242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.652524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.652548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652561 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.652657 | orchestrator | 2025-08-29 15:14:26.652661 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-08-29 15:14:26.652665 | orchestrator | Friday 29 August 2025 15:12:03 +0000 (0:00:08.868) 0:00:42.426 ********* 2025-08-29 15:14:26.652669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.652674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.652684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-08-29 15:14:26.652688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652712 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.652716 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.652720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.652727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.652769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.652777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652819 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.652823 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.652827 | orchestrator | 2025-08-29 15:14:26.652831 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-08-29 15:14:26.652835 | orchestrator | Friday 29 August 2025 15:12:04 +0000 (0:00:01.170) 0:00:43.596 ********* 2025-08-29 15:14:26.652839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.652846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.652850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652883 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.652887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.652929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.652934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.652947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 
15:14:26.652971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.653024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653035 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653088 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.653094 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.653099 | orchestrator | 2025-08-29 15:14:26.653105 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-08-29 15:14:26.653128 | orchestrator | Friday 29 August 2025 15:12:08 +0000 (0:00:03.317) 0:00:46.914 ********* 2025-08-29 15:14:26.653134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.653148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.653156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.653166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-08-29 15:14:26.653269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653287 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653317 | orchestrator | 2025-08-29 15:14:26.653322 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-08-29 15:14:26.653326 | orchestrator | Friday 
29 August 2025 15:12:15 +0000 (0:00:07.225) 0:00:54.140 ********* 2025-08-29 15:14:26.653330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.653339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.653343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.653349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:14:26.653365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 
15:14:26.653424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653444 | orchestrator | 2025-08-29 15:14:26.653448 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-08-29 15:14:26.653453 | orchestrator | Friday 29 August 2025 15:12:31 +0000 (0:00:16.281) 0:01:10.421 ********* 2025-08-29 15:14:26.653463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:14:26.653468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:14:26.653474 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-08-29 15:14:26.653479 | orchestrator | 2025-08-29 15:14:26.653483 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-08-29 15:14:26.653487 | orchestrator | Friday 29 August 2025 15:12:36 +0000 (0:00:04.829) 0:01:15.250 ********* 2025-08-29 15:14:26.653492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:14:26.653496 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:14:26.653500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-08-29 15:14:26.653504 | orchestrator | 2025-08-29 15:14:26.653509 | orchestrator | TASK [designate : Copying over rndc.conf] 
************************************** 2025-08-29 15:14:26.653513 | orchestrator | Friday 29 August 2025 15:12:39 +0000 (0:00:02.564) 0:01:17.815 ********* 2025-08-29 15:14:26.653517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653525 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653622 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653678 | orchestrator | 2025-08-29 15:14:26.653682 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-08-29 15:14:26.653685 | orchestrator | Friday 29 August 2025 15:12:41 +0000 (0:00:02.659) 0:01:20.474 ********* 2025-08-29 15:14:26.653689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 
15:14:26.653748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653765 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.653791 | orchestrator | 2025-08-29 15:14:26.653795 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:14:26.653799 | orchestrator | Friday 29 August 2025 15:12:44 +0000 (0:00:02.580) 0:01:23.055 ********* 2025-08-29 15:14:26.653803 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.653807 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.653811 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.653815 | orchestrator | 2025-08-29 15:14:26.653818 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-08-29 15:14:26.653825 | orchestrator | Friday 29 August 2025 15:12:44 +0000 (0:00:00.290) 0:01:23.346 ********* 2025-08-29 15:14:26.653829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.653837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653860 | orchestrator | skipping: 
[testbed-node-0] 2025-08-29 15:14:26.653867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.653875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653914 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.653925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-08-29 15:14:26.653931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-08-29 15:14:26.653937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:14:26.653990 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.653996 | orchestrator | 2025-08-29 15:14:26.654004 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-08-29 15:14:26.654054 | orchestrator | Friday 29 August 2025 15:12:46 +0000 (0:00:02.187) 0:01:25.534 ********* 2025-08-29 15:14:26.654071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.654078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.654090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-08-29 15:14:26.654096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654219 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:14:26.654333 | orchestrator | 2025-08-29 15:14:26.654337 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-08-29 15:14:26.654341 | orchestrator | Friday 29 August 2025 15:12:52 +0000 (0:00:05.385) 0:01:30.919 ********* 2025-08-29 15:14:26.654344 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:14:26.654348 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:14:26.654352 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:14:26.654356 | orchestrator | 2025-08-29 15:14:26.654363 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-08-29 15:14:26.654367 | orchestrator | Friday 29 August 2025 15:12:52 +0000 (0:00:00.340) 0:01:31.260 ********* 2025-08-29 15:14:26.654370 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-08-29 15:14:26.654375 | orchestrator | 2025-08-29 15:14:26.654378 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2025-08-29 15:14:26.654382 | orchestrator | Friday 29 August 2025 15:12:54 +0000 (0:00:02.250) 0:01:33.511 ********* 2025-08-29 15:14:26.654386 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:14:26.654390 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-08-29 15:14:26.654394 | orchestrator | 2025-08-29 15:14:26.654398 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-08-29 15:14:26.654401 | orchestrator | Friday 29 August 2025 15:12:57 +0000 (0:00:02.796) 0:01:36.307 ********* 2025-08-29 15:14:26.654408 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.654412 | orchestrator | 2025-08-29 15:14:26.654416 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:14:26.654420 | orchestrator | Friday 29 August 2025 15:13:12 +0000 (0:00:14.700) 0:01:51.007 ********* 2025-08-29 15:14:26.654423 | orchestrator | 2025-08-29 15:14:26.654427 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:14:26.654435 | orchestrator | Friday 29 August 2025 15:13:12 +0000 (0:00:00.376) 0:01:51.383 ********* 2025-08-29 15:14:26.654439 | orchestrator | 2025-08-29 15:14:26.654442 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-08-29 15:14:26.654446 | orchestrator | Friday 29 August 2025 15:13:12 +0000 (0:00:00.052) 0:01:51.436 ********* 2025-08-29 15:14:26.654450 | orchestrator | 2025-08-29 15:14:26.654454 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-08-29 15:14:26.654458 | orchestrator | Friday 29 August 2025 15:13:12 +0000 (0:00:00.054) 0:01:51.490 ********* 2025-08-29 15:14:26.654461 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.654465 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:26.654469 | 
orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:26.654472 | orchestrator | 2025-08-29 15:14:26.654476 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-08-29 15:14:26.654480 | orchestrator | Friday 29 August 2025 15:13:21 +0000 (0:00:08.559) 0:02:00.050 ********* 2025-08-29 15:14:26.654484 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.654487 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:26.654491 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:26.654495 | orchestrator | 2025-08-29 15:14:26.654498 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-08-29 15:14:26.654502 | orchestrator | Friday 29 August 2025 15:13:34 +0000 (0:00:13.077) 0:02:13.128 ********* 2025-08-29 15:14:26.654506 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:26.654509 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.654513 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:26.654517 | orchestrator | 2025-08-29 15:14:26.654520 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-08-29 15:14:26.654524 | orchestrator | Friday 29 August 2025 15:13:49 +0000 (0:00:15.224) 0:02:28.353 ********* 2025-08-29 15:14:26.654528 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.654532 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:26.654535 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:14:26.654539 | orchestrator | 2025-08-29 15:14:26.654543 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-08-29 15:14:26.654547 | orchestrator | Friday 29 August 2025 15:13:58 +0000 (0:00:08.853) 0:02:37.206 ********* 2025-08-29 15:14:26.654550 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:14:26.654554 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:14:26.654558 | orchestrator | 
changed: [testbed-node-2]
2025-08-29 15:14:26.654562 | orchestrator |
2025-08-29 15:14:26.654566 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-08-29 15:14:26.654569 | orchestrator | Friday 29 August 2025 15:14:09 +0000 (0:00:11.008) 0:02:48.215 *********
2025-08-29 15:14:26.654573 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:14:26.654577 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:14:26.654581 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:14:26.654584 | orchestrator |
2025-08-29 15:14:26.654588 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-08-29 15:14:26.654592 | orchestrator | Friday 29 August 2025 15:14:17 +0000 (0:00:07.782) 0:02:55.998 *********
2025-08-29 15:14:26.654596 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:14:26.654599 | orchestrator |
2025-08-29 15:14:26.654603 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:14:26.654608 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 15:14:26.654614 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 15:14:26.654618 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-08-29 15:14:26.654626 | orchestrator |
2025-08-29 15:14:26.654630 | orchestrator |
2025-08-29 15:14:26.654633 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:14:26.654637 | orchestrator | Friday 29 August 2025 15:14:24 +0000 (0:00:07.246) 0:03:03.244 *********
2025-08-29 15:14:26.654641 | orchestrator | ===============================================================================
2025-08-29 15:14:26.654645 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.28s
2025-08-29 15:14:26.654648 | orchestrator | designate : Restart designate-central container ------------------------ 15.23s
2025-08-29 15:14:26.654652 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.70s
2025-08-29 15:14:26.654658 | orchestrator | designate : Restart designate-api container ---------------------------- 13.08s
2025-08-29 15:14:26.654662 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.01s
2025-08-29 15:14:26.654666 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.87s
2025-08-29 15:14:26.654670 | orchestrator | designate : Restart designate-producer container ------------------------ 8.85s
2025-08-29 15:14:26.654673 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.56s
2025-08-29 15:14:26.654677 | orchestrator | designate : Restart designate-worker container -------------------------- 7.78s
2025-08-29 15:14:26.654681 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.25s
2025-08-29 15:14:26.654685 | orchestrator | designate : Copying over config.json files for services ----------------- 7.23s
2025-08-29 15:14:26.654691 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.84s
2025-08-29 15:14:26.654695 | orchestrator | designate : Check designate containers ---------------------------------- 5.39s
2025-08-29 15:14:26.654699 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.83s
2025-08-29 15:14:26.654703 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.53s
2025-08-29 15:14:26.654706 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.21s
2025-08-29 15:14:26.654710 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.84s
2025-08-29 15:14:26.654714 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.82s
2025-08-29 15:14:26.654718 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.40s
2025-08-29 15:14:26.654722 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 3.32s
2025-08-29 15:14:29.690654 | orchestrator | 2025-08-29 15:14:29 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:14:29.691242 | orchestrator | 2025-08-29 15:14:29 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:14:29.691980 | orchestrator | 2025-08-29 15:14:29 | INFO  | Task caf7e1e6-bf31-4657-a700-5410817050ed is in state STARTED
2025-08-29 15:14:29.692832 | orchestrator | 2025-08-29 15:14:29 | INFO  | Task bd4efc21-f9a8-4d2c-a6e7-897975b20ff7 is in state STARTED
2025-08-29 15:14:29.692862 | orchestrator | 2025-08-29 15:14:29 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:32.735212 | orchestrator | 2025-08-29 15:14:32 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:14:32.736460 | orchestrator | 2025-08-29 15:14:32 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:14:32.738754 | orchestrator | 2025-08-29 15:14:32 | INFO  | Task caf7e1e6-bf31-4657-a700-5410817050ed is in state STARTED
2025-08-29 15:14:32.739614 | orchestrator | 2025-08-29 15:14:32 | INFO  | Task bd4efc21-f9a8-4d2c-a6e7-897975b20ff7 is in state SUCCESS
2025-08-29 15:14:32.739664 | orchestrator | 2025-08-29 15:14:32 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:14:35.784873 | orchestrator | 2025-08-29 15:14:35 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:14:35.786292 | orchestrator | 2025-08-29 15:14:35 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED 2025-08-29
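The TASKS RECAP table above appears to come from an Ansible timing-profile callback: each line pads the task name with dashes and appends the task's duration in seconds. A minimal sketch, assuming only that line format (the `slowest_tasks` helper and its regex are illustrative, not part of this job), of pulling the slowest steps out of such a recap:

```python
import re

# Matches timing-recap lines such as:
#   "designate : Copying over designate.conf ----------------- 16.28s"
RECAP_RE = re.compile(r"^(?P<task>.+?) -{2,} *(?P<secs>\d+\.\d+)s$")

def slowest_tasks(lines, top=5):
    """Return the `top` slowest (task, seconds) pairs found in recap lines."""
    found = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            found.append((m.group("task").rstrip("- "), float(m.group("secs"))))
    return sorted(found, key=lambda t: t[1], reverse=True)[:top]

recap = [
    "designate : Copying over designate.conf -------------------------------- 16.28s",
    "designate : Restart designate-central container ------------------------ 15.23s",
    "designate : Non-destructive DNS pools update ---------------------------- 7.25s",
]
print(slowest_tasks(recap, top=2))
```

Run over the recap above, this surfaces the `designate.conf` template task (16.28s) as the slowest step of the designate play.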
15:14:35.787579 | orchestrator | 2025-08-29 15:14:35 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:14:35.789135 | orchestrator | 2025-08-29 15:14:35 | INFO  | Task caf7e1e6-bf31-4657-a700-5410817050ed is in state STARTED
2025-08-29 15:14:35.789419 | orchestrator | 2025-08-29 15:14:35 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:49.032257 | orchestrator | 2025-08-29 15:15:49 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED
2025-08-29 15:15:49.035764 | orchestrator | 2025-08-29 15:15:49 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:15:49.037762 | orchestrator | 2025-08-29 15:15:49 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:15:49.040923 | orchestrator | 2025-08-29 15:15:49 | INFO  | Task caf7e1e6-bf31-4657-a700-5410817050ed is in state SUCCESS
2025-08-29 15:15:49.040992 | orchestrator | 2025-08-29 15:15:49 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:15:49.042322 | orchestrator |
2025-08-29 15:15:49.042484 | orchestrator |
2025-08-29 15:15:49.042495 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:15:49.042503 | orchestrator |
2025-08-29 15:15:49.042510 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:15:49.042517 | orchestrator | Friday 29 August 2025 15:14:29 +0000 (0:00:00.171) 0:00:00.171 *********
2025-08-29 15:15:49.042524 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:49.042532 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:15:49.042539 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:15:49.042546 | orchestrator |
2025-08-29 15:15:49.042552 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:15:49.042559 | orchestrator | Friday 29 August 2025 15:14:30 +0000 (0:00:00.284) 0:00:00.456 *********
2025-08-29 15:15:49.042566 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-08-29 15:15:49.042574 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-08-29 15:15:49.042580 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-08-29 15:15:49.042587 | orchestrator |
2025-08-29 15:15:49.042593 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-08-29 15:15:49.042600 | orchestrator |
2025-08-29 15:15:49.042606 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-08-29 15:15:49.042613 | orchestrator | Friday 29 August 2025 15:14:30 +0000 (0:00:00.629)
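The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above are a client polling its background task IDs once per interval until each one reaches SUCCESS. A minimal sketch of the same wait-loop pattern (the `fetch_state` callable and the state names are placeholders standing in for the real task-status API, which this log does not show):

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=300.0):
    """Poll fetch_state(task_id) until every task reports SUCCESS.

    Raises RuntimeError on a FAILURE state or when the timeout elapses.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)      # finished tasks leave the poll set
            elif state == "FAILURE":
                raise RuntimeError(f"Task {task_id} failed")
        if pending:
            if time.monotonic() > deadline:
                raise RuntimeError(f"Timed out waiting for {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Tasks that finish early (like `bd4efc21…` above) simply drop out of the set while the loop keeps polling the rest.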
0:00:01.085 *********
2025-08-29 15:15:49.042620 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:49.042626 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:15:49.042633 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:15:49.042639 | orchestrator |
2025-08-29 15:15:49.042646 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:15:49.042653 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:15:49.042662 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:15:49.042671 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:15:49.042679 | orchestrator |
2025-08-29 15:15:49.042685 | orchestrator |
2025-08-29 15:15:49.042692 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:15:49.042722 | orchestrator | Friday 29 August 2025 15:14:31 +0000 (0:00:00.686) 0:00:01.771 *********
2025-08-29 15:15:49.042730 | orchestrator | ===============================================================================
2025-08-29 15:15:49.042737 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.69s
2025-08-29 15:15:49.042743 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2025-08-29 15:15:49.042750 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2025-08-29 15:15:49.042757 | orchestrator |
2025-08-29 15:15:49.042764 | orchestrator |
2025-08-29 15:15:49.042771 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:15:49.042777 | orchestrator |
2025-08-29 15:15:49.042784 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:15:49.042790 | orchestrator | Friday 29 August 2025 15:14:00 +0000 (0:00:00.319) 0:00:00.319 *********
2025-08-29 15:15:49.042797 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:15:49.042803 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:15:49.042809 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:15:49.042816 | orchestrator |
2025-08-29 15:15:49.042823 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:15:49.042830 | orchestrator | Friday 29 August 2025 15:14:00 +0000 (0:00:00.509) 0:00:00.829 *********
2025-08-29 15:15:49.042836 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-08-29 15:15:49.042843 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-08-29 15:15:49.042850 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-08-29 15:15:49.042856 | orchestrator |
2025-08-29 15:15:49.042863 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-08-29 15:15:49.042870 | orchestrator |
2025-08-29 15:15:49.042876 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-08-29 15:15:49.042883 | orchestrator | Friday 29 August 2025 15:14:01 +0000 (0:00:00.568) 0:00:01.397 *********
2025-08-29 15:15:49.042889 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:15:49.042895 | orchestrator |
2025-08-29 15:15:49.042900 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-08-29 15:15:49.042907 | orchestrator | Friday 29 August 2025 15:14:02 +0000 (0:00:00.574) 0:00:01.971 *********
2025-08-29 15:15:49.042913 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-08-29 15:15:49.042919 | orchestrator |
2025-08-29 15:15:49.042924 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-08-29 15:15:49.042931 | orchestrator | Friday 29 August 2025 15:14:05 +0000 (0:00:03.276) 0:00:05.247 *********
2025-08-29 15:15:49.042937 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-08-29 15:15:49.042956 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-08-29 15:15:49.042963 | orchestrator |
2025-08-29 15:15:49.042968 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-08-29 15:15:49.042973 | orchestrator | Friday 29 August 2025 15:14:11 +0000 (0:00:06.333) 0:00:11.581 *********
2025-08-29 15:15:49.042979 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-08-29 15:15:49.042984 | orchestrator |
2025-08-29 15:15:49.042990 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-08-29 15:15:49.042995 | orchestrator | Friday 29 August 2025 15:14:15 +0000 (0:00:03.406) 0:00:14.988 *********
2025-08-29 15:15:49.043012 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-08-29 15:15:49.043045 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-08-29 15:15:49.043052 | orchestrator |
2025-08-29 15:15:49.043058 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-08-29 15:15:49.043064 | orchestrator | Friday 29 August 2025 15:14:19 +0000 (0:00:04.000) 0:00:18.988 *********
2025-08-29 15:15:49.043080 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-08-29 15:15:49.043087 | orchestrator |
2025-08-29 15:15:49.043093 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-08-29 15:15:49.043099 | orchestrator | Friday 29 August 2025 15:14:22 +0000 (0:00:03.422) 0:00:22.411 *********
2025-08-29 15:15:49.043105 | orchestrator | changed:
[testbed-node-0] => (item=magnum -> service -> admin)
2025-08-29 15:15:49.043122 | orchestrator |
2025-08-29 15:15:49.043129 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-08-29 15:15:49.043139 | orchestrator | Friday 29 August 2025 15:14:26 +0000 (0:00:04.414) 0:00:26.825 *********
2025-08-29 15:15:49.043145 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:49.043153 | orchestrator |
2025-08-29 15:15:49.043165 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-08-29 15:15:49.043177 | orchestrator | Friday 29 August 2025 15:14:30 +0000 (0:00:03.485) 0:00:30.311 *********
2025-08-29 15:15:49.043188 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:49.043200 | orchestrator |
2025-08-29 15:15:49.043212 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-08-29 15:15:49.043224 | orchestrator | Friday 29 August 2025 15:14:34 +0000 (0:00:03.892) 0:00:34.203 *********
2025-08-29 15:15:49.043235 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:15:49.043246 | orchestrator |
2025-08-29 15:15:49.043253 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-08-29 15:15:49.043259 | orchestrator | Friday 29 August 2025 15:14:38 +0000 (0:00:03.840) 0:00:38.044 *********
2025-08-29 15:15:49.043270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:15:49.043281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:15:49.043294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:15:49.043315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:49.043323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:49.043330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:49.043336 | orchestrator |
2025-08-29 15:15:49.043350 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-08-29 15:15:49.043356 | orchestrator | Friday 29 August 2025 15:14:39 +0000 (0:00:01.370) 0:00:39.414 *********
2025-08-29 15:15:49.043361 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:49.043367 | orchestrator |
2025-08-29 15:15:49.043374 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-08-29 15:15:49.043380 | orchestrator | Friday 29 August 2025 15:14:39 +0000 (0:00:00.146) 0:00:39.561 *********
2025-08-29 15:15:49.043387 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:15:49.043396 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:15:49.043402 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:15:49.043409 | orchestrator |
2025-08-29 15:15:49.043415 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-08-29 15:15:49.043422 | orchestrator | Friday 29 August 2025 15:14:40 +0000 (0:00:00.495) 0:00:40.057 *********
2025-08-29 15:15:49.043429 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:15:49.043435 | orchestrator |
2025-08-29 15:15:49.043442 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-08-29 15:15:49.043448 | orchestrator | Friday 29 August 2025 15:14:41 +0000 (0:00:01.048) 0:00:41.106 *********
2025-08-29 15:15:49.043459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:15:49.043478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:15:49.043485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-08-29 15:15:49.043492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:15:49.043499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy':
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.043514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.043520 | orchestrator | 2025-08-29 15:15:49.043527 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-08-29 15:15:49.043533 | orchestrator | Friday 29 August 2025 15:14:43 +0000 (0:00:02.446) 0:00:43.552 ********* 2025-08-29 15:15:49.043540 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:15:49.043546 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:15:49.043552 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:15:49.043558 | orchestrator | 2025-08-29 15:15:49.043565 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:15:49.043574 | orchestrator | Friday 29 August 2025 15:14:44 +0000 (0:00:00.322) 0:00:43.874 ********* 2025-08-29 15:15:49.043581 | 
orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:15:49.043588 | orchestrator | 2025-08-29 15:15:49.043594 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-08-29 15:15:49.043600 | orchestrator | Friday 29 August 2025 15:14:44 +0000 (0:00:00.768) 0:00:44.643 ********* 2025-08-29 15:15:49.043607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.043614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.043620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.043634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 
15:15:49.043647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.043653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.043660 | orchestrator | 2025-08-29 15:15:49.043666 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-08-29 15:15:49.043673 | orchestrator | Friday 29 August 2025 15:14:47 +0000 (0:00:02.617) 0:00:47.260 ********* 2025-08-29 15:15:49.043679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.043686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.043697 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:49.043706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.043718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.043725 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:49.043732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.043739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.043752 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:49.043758 | orchestrator | 2025-08-29 15:15:49.043764 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-08-29 15:15:49.043771 | orchestrator | Friday 29 August 2025 15:14:48 +0000 (0:00:00.715) 0:00:47.975 ********* 2025-08-29 15:15:49.043778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.043788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.043795 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:49.043806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.043814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.043821 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:49.043827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.043840 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.043847 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:49.043853 | orchestrator | 2025-08-29 15:15:49.043860 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-08-29 15:15:49.043867 | orchestrator | Friday 29 August 2025 15:14:49 +0000 (0:00:01.147) 0:00:49.123 ********* 2025-08-29 15:15:49.043882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}}) 2025-08-29 15:15:49.043889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.043896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.043911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.043918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.043932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.043939 | orchestrator | 2025-08-29 15:15:49.043945 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-08-29 15:15:49.043952 | orchestrator | Friday 29 August 2025 15:14:51 +0000 (0:00:02.483) 0:00:51.607 ********* 2025-08-29 15:15:49.043959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.043966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.043979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.043989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.044002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.044009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.044045 | orchestrator | 2025-08-29 15:15:49.044052 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-08-29 15:15:49.044058 | orchestrator | Friday 29 August 2025 15:14:57 +0000 (0:00:05.562) 0:00:57.169 ********* 2025-08-29 15:15:49.044065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.044073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.044085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.044100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.044107 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:49.044114 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:49.044121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-08-29 15:15:49.044134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:15:49.044141 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:49.044148 | orchestrator | 2025-08-29 15:15:49.044155 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-08-29 15:15:49.044161 | orchestrator | Friday 29 August 2025 15:14:58 +0000 (0:00:00.712) 0:00:57.881 ********* 2025-08-29 15:15:49.044167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.044181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.044188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-08-29 15:15:49.044202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.044209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.044217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:15:49.044223 | orchestrator | 2025-08-29 15:15:49.044230 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-08-29 15:15:49.044237 | orchestrator | Friday 29 August 2025 15:15:01 +0000 (0:00:03.452) 0:01:01.334 ********* 2025-08-29 15:15:49.044244 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:15:49.044250 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:15:49.044256 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:15:49.044263 | orchestrator | 2025-08-29 15:15:49.044268 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-08-29 15:15:49.044278 | orchestrator | Friday 29 August 2025 15:15:01 +0000 (0:00:00.454) 0:01:01.789 ********* 2025-08-29 15:15:49.044285 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:49.044297 | orchestrator | 2025-08-29 15:15:49.044303 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-08-29 15:15:49.044310 | orchestrator | Friday 29 August 2025 15:15:04 +0000 (0:00:02.213) 0:01:04.002 ********* 2025-08-29 15:15:49.044317 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:49.044323 | orchestrator | 2025-08-29 15:15:49.044330 | orchestrator | TASK [magnum : Running Magnum bootstrap 
container] ***************************** 2025-08-29 15:15:49.044337 | orchestrator | Friday 29 August 2025 15:15:06 +0000 (0:00:02.252) 0:01:06.255 ********* 2025-08-29 15:15:49.044349 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:49.044362 | orchestrator | 2025-08-29 15:15:49.044369 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 15:15:49.044375 | orchestrator | Friday 29 August 2025 15:15:22 +0000 (0:00:16.499) 0:01:22.754 ********* 2025-08-29 15:15:49.044382 | orchestrator | 2025-08-29 15:15:49.044388 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 15:15:49.044395 | orchestrator | Friday 29 August 2025 15:15:22 +0000 (0:00:00.067) 0:01:22.822 ********* 2025-08-29 15:15:49.044401 | orchestrator | 2025-08-29 15:15:49.044408 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-08-29 15:15:49.044415 | orchestrator | Friday 29 August 2025 15:15:23 +0000 (0:00:00.068) 0:01:22.891 ********* 2025-08-29 15:15:49.044421 | orchestrator | 2025-08-29 15:15:49.044428 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-08-29 15:15:49.044434 | orchestrator | Friday 29 August 2025 15:15:23 +0000 (0:00:00.068) 0:01:22.960 ********* 2025-08-29 15:15:49.044441 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:49.044452 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:15:49.044461 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:15:49.044468 | orchestrator | 2025-08-29 15:15:49.044478 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-08-29 15:15:49.044485 | orchestrator | Friday 29 August 2025 15:15:36 +0000 (0:00:13.291) 0:01:36.251 ********* 2025-08-29 15:15:49.044492 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:15:49.044498 | orchestrator | changed: 
[testbed-node-2] 2025-08-29 15:15:49.044505 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:15:49.044512 | orchestrator | 2025-08-29 15:15:49.044519 | orchestrator | PLAY RECAP ********************************************************************* 2025-08-29 15:15:49.044527 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-08-29 15:15:49.044536 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:15:49.044542 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-08-29 15:15:49.044549 | orchestrator | 2025-08-29 15:15:49.044555 | orchestrator | 2025-08-29 15:15:49.044562 | orchestrator | TASKS RECAP ******************************************************************** 2025-08-29 15:15:49.044569 | orchestrator | Friday 29 August 2025 15:15:46 +0000 (0:00:10.410) 0:01:46.661 ********* 2025-08-29 15:15:49.044575 | orchestrator | =============================================================================== 2025-08-29 15:15:49.044581 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.50s 2025-08-29 15:15:49.044588 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.29s 2025-08-29 15:15:49.044594 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.41s 2025-08-29 15:15:49.044600 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.33s 2025-08-29 15:15:49.044607 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.56s 2025-08-29 15:15:49.044613 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.41s 2025-08-29 15:15:49.044620 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.00s 2025-08-29 
15:15:49.044626 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.89s 2025-08-29 15:15:49.044633 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.84s 2025-08-29 15:15:49.044639 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.49s 2025-08-29 15:15:49.044646 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.45s 2025-08-29 15:15:49.044653 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.42s 2025-08-29 15:15:49.044659 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.41s 2025-08-29 15:15:49.044674 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.28s 2025-08-29 15:15:49.044681 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.62s 2025-08-29 15:15:49.044688 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.48s 2025-08-29 15:15:49.044694 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.45s 2025-08-29 15:15:49.044701 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.25s 2025-08-29 15:15:49.044707 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.21s 2025-08-29 15:15:49.044714 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.37s 2025-08-29 15:15:52.070776 | orchestrator | 2025-08-29 15:15:52 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state STARTED 2025-08-29 15:15:52.071216 | orchestrator | 2025-08-29 15:15:52 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED 2025-08-29 15:15:52.071857 | orchestrator | 2025-08-29 15:15:52 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED 2025-08-29 15:15:52.072204 | orchestrator | 2025-08-29 15:15:52 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:16:37.789371 | orchestrator | 2025-08-29 15:16:37 | INFO  | Task f7fa0ec4-9c5f-4d98-a507-e4a56685e0c4 is in state SUCCESS 2025-08-29 15:16:37.792236 | orchestrator | 2025-08-29 15:16:37.792311 | orchestrator | 2025-08-29 15:16:37.792326 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:16:37.792340 | orchestrator | 2025-08-29 15:16:37.792352 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-08-29 15:16:37.792364 | orchestrator |
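The log above shows the orchestrator waiting on three task IDs, polling their state once per second until they leave STARTED (here ending in SUCCESS at 15:16:37). A minimal sketch of such a wait loop; `get_state` is a hypothetical stand-in for the real task-state lookup:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll task states until none is STARTED or the timeout expires."""
    deadline = time.monotonic() + timeout
    states = {}
    while time.monotonic() < deadline:
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s != "STARTED" for s in states.values()):
            return states
        time.sleep(interval)  # "Wait 1 second(s) until the next check"
    raise TimeoutError(f"tasks still running: {states}")

# Example with a stubbed state source that succeeds on the third poll:
calls = {"n": 0}
def get_state(tid):
    calls["n"] += 1
    return "SUCCESS" if calls["n"] > 2 else "STARTED"

print(wait_for_tasks(["f7fa0ec4"], get_state, interval=0.01))
# → {'f7fa0ec4': 'SUCCESS'}
```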
Friday 29 August 2025 15:07:26 +0000 (0:00:00.312) 0:00:00.312 ********* 2025-08-29 15:16:37.792375 | orchestrator | changed: [testbed-manager] 2025-08-29 15:16:37.792388 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.792398 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:37.792409 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:37.792420 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:37.792432 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:37.792445 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:37.792456 | orchestrator | 2025-08-29 15:16:37.792467 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:16:37.792477 | orchestrator | Friday 29 August 2025 15:07:27 +0000 (0:00:00.925) 0:00:01.237 ********* 2025-08-29 15:16:37.792488 | orchestrator | changed: [testbed-manager] 2025-08-29 15:16:37.792500 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.792512 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:16:37.792523 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:16:37.792534 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:37.792548 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:37.792649 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:37.792661 | orchestrator | 2025-08-29 15:16:37.792669 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:16:37.792676 | orchestrator | Friday 29 August 2025 15:07:28 +0000 (0:00:00.860) 0:00:02.098 ********* 2025-08-29 15:16:37.792683 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-08-29 15:16:37.792690 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-08-29 15:16:37.792696 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-08-29 15:16:37.792702 | orchestrator | changed: [testbed-node-2] => 
(item=enable_nova_True) 2025-08-29 15:16:37.792723 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-08-29 15:16:37.792730 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-08-29 15:16:37.792736 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-08-29 15:16:37.792743 | orchestrator | 2025-08-29 15:16:37.792749 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-08-29 15:16:37.792755 | orchestrator | 2025-08-29 15:16:37.792761 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 15:16:37.792788 | orchestrator | Friday 29 August 2025 15:07:29 +0000 (0:00:00.975) 0:00:03.074 ********* 2025-08-29 15:16:37.792795 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:37.792801 | orchestrator | 2025-08-29 15:16:37.792808 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-08-29 15:16:37.792815 | orchestrator | Friday 29 August 2025 15:07:30 +0000 (0:00:00.792) 0:00:03.866 ********* 2025-08-29 15:16:37.792823 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-08-29 15:16:37.792830 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-08-29 15:16:37.792837 | orchestrator | 2025-08-29 15:16:37.792844 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-08-29 15:16:37.792852 | orchestrator | Friday 29 August 2025 15:07:33 +0000 (0:00:03.446) 0:00:07.312 ********* 2025-08-29 15:16:37.792859 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:16:37.792866 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-08-29 15:16:37.792874 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.792881 | orchestrator | 2025-08-29 15:16:37.792887 | orchestrator | TASK [nova : Ensuring config 
directories exist] ******************************** 2025-08-29 15:16:37.792893 | orchestrator | Friday 29 August 2025 15:07:37 +0000 (0:00:03.774) 0:00:11.087 ********* 2025-08-29 15:16:37.792900 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.792906 | orchestrator | 2025-08-29 15:16:37.792912 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-08-29 15:16:37.792918 | orchestrator | Friday 29 August 2025 15:07:38 +0000 (0:00:01.164) 0:00:12.251 ********* 2025-08-29 15:16:37.792924 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.792930 | orchestrator | 2025-08-29 15:16:37.792936 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-08-29 15:16:37.792943 | orchestrator | Friday 29 August 2025 15:07:40 +0000 (0:00:02.073) 0:00:14.324 ********* 2025-08-29 15:16:37.792949 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.792955 | orchestrator | 2025-08-29 15:16:37.793019 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:16:37.793031 | orchestrator | Friday 29 August 2025 15:07:43 +0000 (0:00:03.387) 0:00:17.712 ********* 2025-08-29 15:16:37.793038 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.793049 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.793059 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.793069 | orchestrator | 2025-08-29 15:16:37.793081 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 15:16:37.793087 | orchestrator | Friday 29 August 2025 15:07:44 +0000 (0:00:00.501) 0:00:18.214 ********* 2025-08-29 15:16:37.793093 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:37.793100 | orchestrator | 2025-08-29 15:16:37.793106 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-08-29 15:16:37.793112 | 
orchestrator | Friday 29 August 2025 15:08:14 +0000 (0:00:30.180) 0:00:48.394 ********* 2025-08-29 15:16:37.793118 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.793406 | orchestrator | 2025-08-29 15:16:37.793414 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 15:16:37.793420 | orchestrator | Friday 29 August 2025 15:08:28 +0000 (0:00:13.586) 0:01:01.981 ********* 2025-08-29 15:16:37.793426 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:37.793432 | orchestrator | 2025-08-29 15:16:37.793439 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 15:16:37.793445 | orchestrator | Friday 29 August 2025 15:08:39 +0000 (0:00:11.654) 0:01:13.635 ********* 2025-08-29 15:16:37.793464 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:37.793471 | orchestrator | 2025-08-29 15:16:37.793477 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-08-29 15:16:37.793484 | orchestrator | Friday 29 August 2025 15:08:41 +0000 (0:00:01.243) 0:01:14.879 ********* 2025-08-29 15:16:37.793490 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.793505 | orchestrator | 2025-08-29 15:16:37.793512 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:16:37.793518 | orchestrator | Friday 29 August 2025 15:08:41 +0000 (0:00:00.536) 0:01:15.415 ********* 2025-08-29 15:16:37.793524 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:37.793531 | orchestrator | 2025-08-29 15:16:37.793537 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-08-29 15:16:37.793543 | orchestrator | Friday 29 August 2025 15:08:42 +0000 (0:00:00.587) 0:01:16.003 ********* 2025-08-29 15:16:37.793549 | orchestrator | ok: 
[testbed-node-0] 2025-08-29 15:16:37.793555 | orchestrator | 2025-08-29 15:16:37.793562 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 15:16:37.793568 | orchestrator | Friday 29 August 2025 15:09:00 +0000 (0:00:18.291) 0:01:34.294 ********* 2025-08-29 15:16:37.793574 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.793580 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.793614 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.793621 | orchestrator | 2025-08-29 15:16:37.793627 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-08-29 15:16:37.793633 | orchestrator | 2025-08-29 15:16:37.793640 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-08-29 15:16:37.793646 | orchestrator | Friday 29 August 2025 15:09:00 +0000 (0:00:00.314) 0:01:34.609 ********* 2025-08-29 15:16:37.793652 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:37.793697 | orchestrator | 2025-08-29 15:16:37.793704 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-08-29 15:16:37.793717 | orchestrator | Friday 29 August 2025 15:09:01 +0000 (0:00:00.584) 0:01:35.194 ********* 2025-08-29 15:16:37.793723 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.793730 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.793736 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.793742 | orchestrator | 2025-08-29 15:16:37.793748 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-08-29 15:16:37.793755 | orchestrator | Friday 29 August 2025 15:09:03 +0000 (0:00:02.101) 0:01:37.295 ********* 2025-08-29 15:16:37.793761 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.793767 | orchestrator | skipping: 
[testbed-node-2] 2025-08-29 15:16:37.793774 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.793780 | orchestrator | 2025-08-29 15:16:37.793786 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 15:16:37.793793 | orchestrator | Friday 29 August 2025 15:09:05 +0000 (0:00:02.303) 0:01:39.599 ********* 2025-08-29 15:16:37.793799 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.793805 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.793811 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.793818 | orchestrator | 2025-08-29 15:16:37.793824 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-08-29 15:16:37.793830 | orchestrator | Friday 29 August 2025 15:09:06 +0000 (0:00:00.361) 0:01:39.960 ********* 2025-08-29 15:16:37.793836 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:16:37.793843 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.793849 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:16:37.793856 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.793862 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-08-29 15:16:37.793868 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-08-29 15:16:37.793874 | orchestrator | 2025-08-29 15:16:37.793881 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-08-29 15:16:37.793887 | orchestrator | Friday 29 August 2025 15:09:15 +0000 (0:00:08.889) 0:01:48.850 ********* 2025-08-29 15:16:37.793893 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.793905 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.793911 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.793917 | orchestrator | 2025-08-29 15:16:37.793924 | orchestrator | TASK [service-rabbitmq : nova | Ensure 
RabbitMQ users exist] ******************* 2025-08-29 15:16:37.793930 | orchestrator | Friday 29 August 2025 15:09:15 +0000 (0:00:00.638) 0:01:49.489 ********* 2025-08-29 15:16:37.793937 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-08-29 15:16:37.793943 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.793949 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-08-29 15:16:37.793955 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.793978 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-08-29 15:16:37.793985 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.793991 | orchestrator | 2025-08-29 15:16:37.793997 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 15:16:37.794003 | orchestrator | Friday 29 August 2025 15:09:16 +0000 (0:00:01.136) 0:01:50.626 ********* 2025-08-29 15:16:37.794009 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794053 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794063 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.794070 | orchestrator | 2025-08-29 15:16:37.794077 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-08-29 15:16:37.794084 | orchestrator | Friday 29 August 2025 15:09:17 +0000 (0:00:00.626) 0:01:51.253 ********* 2025-08-29 15:16:37.794091 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794098 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794104 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.794111 | orchestrator | 2025-08-29 15:16:37.794118 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-08-29 15:16:37.794125 | orchestrator | Friday 29 August 2025 15:09:18 +0000 (0:00:01.012) 0:01:52.265 ********* 2025-08-29 15:16:37.794132 | orchestrator | skipping: [testbed-node-1] 2025-08-29 
15:16:37.794139 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794155 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.794162 | orchestrator | 2025-08-29 15:16:37.794169 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-08-29 15:16:37.794176 | orchestrator | Friday 29 August 2025 15:09:20 +0000 (0:00:02.192) 0:01:54.458 ********* 2025-08-29 15:16:37.794183 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794190 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794197 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:37.794204 | orchestrator | 2025-08-29 15:16:37.794211 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-08-29 15:16:37.794217 | orchestrator | Friday 29 August 2025 15:09:42 +0000 (0:00:21.982) 0:02:16.441 ********* 2025-08-29 15:16:37.794224 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794231 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794238 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:37.794245 | orchestrator | 2025-08-29 15:16:37.794251 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-08-29 15:16:37.794258 | orchestrator | Friday 29 August 2025 15:09:56 +0000 (0:00:13.376) 0:02:29.817 ********* 2025-08-29 15:16:37.794265 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:16:37.794271 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794278 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794285 | orchestrator | 2025-08-29 15:16:37.794291 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-08-29 15:16:37.794298 | orchestrator | Friday 29 August 2025 15:09:57 +0000 (0:00:01.347) 0:02:31.165 ********* 2025-08-29 15:16:37.794305 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794312 | 
orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794319 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:37.794325 | orchestrator | 2025-08-29 15:16:37.794332 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-08-29 15:16:37.794345 | orchestrator | Friday 29 August 2025 15:10:10 +0000 (0:00:12.622) 0:02:43.787 ********* 2025-08-29 15:16:37.794352 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.794359 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794365 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794372 | orchestrator | 2025-08-29 15:16:37.794383 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-08-29 15:16:37.794389 | orchestrator | Friday 29 August 2025 15:10:11 +0000 (0:00:01.288) 0:02:45.075 ********* 2025-08-29 15:16:37.794395 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.794402 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794408 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794414 | orchestrator | 2025-08-29 15:16:37.794420 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-08-29 15:16:37.794426 | orchestrator | 2025-08-29 15:16:37.794432 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:16:37.794439 | orchestrator | Friday 29 August 2025 15:10:11 +0000 (0:00:00.532) 0:02:45.608 ********* 2025-08-29 15:16:37.794445 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:37.794452 | orchestrator | 2025-08-29 15:16:37.794459 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-08-29 15:16:37.794465 | orchestrator | Friday 29 August 2025 15:10:12 +0000 (0:00:00.594) 0:02:46.202 ********* 
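Across the bootstrap plays above, each task fans out to the three testbed nodes and prints one `ok:`, `changed:`, or `skipping:` result per host. A small sketch of tallying those result lines into a recap-style summary per host (the regex assumes the `status: [hostname]` shape shown in this log):

```python
import re
from collections import Counter

# Match per-host result lines such as "changed: [testbed-node-0]".
RESULT = re.compile(r"\b(ok|changed|skipping): \[([\w.-]+)")

def tally(lines):
    """Count ok/changed/skipping results per host, recap-style."""
    counts: dict[str, Counter] = {}
    for line in lines:
        m = RESULT.search(line)
        if m:
            status, host = m.groups()
            counts.setdefault(host, Counter())[status] += 1
    return counts

sample = [
    "changed: [testbed-node-0]",
    "skipping: [testbed-node-1]",
    "skipping: [testbed-node-2]",
    "ok: [testbed-node-0]",
]
print(tally(sample))
```

This mirrors the pattern visible above: node-0 does the actual bootstrap work while nodes 1 and 2 skip the run-once database and cell tasks.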
2025-08-29 15:16:37.794471 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-08-29 15:16:37.794478 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-08-29 15:16:37.794484 | orchestrator | 2025-08-29 15:16:37.794490 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-08-29 15:16:37.794496 | orchestrator | Friday 29 August 2025 15:10:15 +0000 (0:00:03.274) 0:02:49.477 ********* 2025-08-29 15:16:37.794503 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-08-29 15:16:37.794511 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-08-29 15:16:37.794517 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-08-29 15:16:37.794523 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-08-29 15:16:37.794529 | orchestrator | 2025-08-29 15:16:37.794536 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-08-29 15:16:37.794542 | orchestrator | Friday 29 August 2025 15:10:22 +0000 (0:00:06.589) 0:02:56.066 ********* 2025-08-29 15:16:37.794548 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:16:37.794554 | orchestrator | 2025-08-29 15:16:37.794560 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-08-29 15:16:37.794566 | orchestrator | Friday 29 August 2025 15:10:25 +0000 (0:00:03.583) 0:02:59.650 ********* 2025-08-29 15:16:37.794573 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:16:37.794579 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-08-29 15:16:37.794585 | orchestrator | 
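The "Creating endpoints" task above registers the nova v2.1 API under two interfaces, with the URL built from an internal or external FQDN plus the service port. A sketch of that URL construction using the exact values from the log (the helper function is ours for illustration, not kolla-ansible's own template):

```python
# Build the internal/public endpoint URLs the way the log above shows
# them registered; fqdn and port values are taken from the log output.
def nova_endpoints(internal_fqdn: str, external_fqdn: str,
                   port: int = 8774, version: str = "v2.1") -> dict:
    return {
        "internal": f"https://{internal_fqdn}:{port}/{version}",
        "public": f"https://{external_fqdn}:{port}/{version}",
    }

eps = nova_endpoints("api-int.testbed.osism.xyz", "api.testbed.osism.xyz")
print(eps["internal"])  # https://api-int.testbed.osism.xyz:8774/v2.1
print(eps["public"])    # https://api.testbed.osism.xyz:8774/v2.1
```

The skipped `nova_legacy` items in the same task carry the older `/v2/%(tenant_id)s` path template instead of the versioned `/v2.1` suffix.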
2025-08-29 15:16:37.794591 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-08-29 15:16:37.794597 | orchestrator | Friday 29 August 2025 15:10:30 +0000 (0:00:04.385) 0:03:04.036 ********* 2025-08-29 15:16:37.794604 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:16:37.794610 | orchestrator | 2025-08-29 15:16:37.794616 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-08-29 15:16:37.794622 | orchestrator | Friday 29 August 2025 15:10:34 +0000 (0:00:03.772) 0:03:07.808 ********* 2025-08-29 15:16:37.794628 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-08-29 15:16:37.794634 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-08-29 15:16:37.794645 | orchestrator | 2025-08-29 15:16:37.794651 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-08-29 15:16:37.794662 | orchestrator | Friday 29 August 2025 15:10:41 +0000 (0:00:07.646) 0:03:15.455 ********* 2025-08-29 15:16:37.794673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.794686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.794695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.794710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.794724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.794731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.794738 | orchestrator | 2025-08-29 15:16:37.794748 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-08-29 15:16:37.794754 | orchestrator | Friday 29 August 2025 15:10:43 +0000 (0:00:01.611) 0:03:17.066 ********* 2025-08-29 15:16:37.794760 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.794767 | orchestrator | 2025-08-29 15:16:37.794773 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-08-29 15:16:37.794779 | orchestrator | Friday 29 August 2025 15:10:43 +0000 (0:00:00.123) 0:03:17.190 ********* 2025-08-29 15:16:37.794785 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.794791 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794797 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794804 | orchestrator | 2025-08-29 15:16:37.794810 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-08-29 15:16:37.794816 | orchestrator | Friday 29 August 2025 15:10:43 +0000 (0:00:00.274) 0:03:17.466 ********* 2025-08-29 15:16:37.794822 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-08-29 
15:16:37.794828 | orchestrator | 2025-08-29 15:16:37.794834 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-08-29 15:16:37.794841 | orchestrator | Friday 29 August 2025 15:10:44 +0000 (0:00:01.040) 0:03:18.506 ********* 2025-08-29 15:16:37.794847 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.794853 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.794859 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.794865 | orchestrator | 2025-08-29 15:16:37.794871 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-08-29 15:16:37.794878 | orchestrator | Friday 29 August 2025 15:10:45 +0000 (0:00:00.597) 0:03:19.104 ********* 2025-08-29 15:16:37.794884 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:37.794890 | orchestrator | 2025-08-29 15:16:37.794896 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 15:16:37.794902 | orchestrator | Friday 29 August 2025 15:10:46 +0000 (0:00:00.699) 0:03:19.804 ********* 2025-08-29 15:16:37.794909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.794926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.794937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.794945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.794977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.794994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.795006 | orchestrator | 2025-08-29 15:16:37.795020 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 15:16:37.795035 | orchestrator | Friday 29 August 2025 15:10:48 +0000 (0:00:02.464) 0:03:22.268 ********* 2025-08-29 15:16:37.795051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:37.795063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.795073 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.795084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:37.795103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.795113 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.795132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:37.795148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.795161 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.795167 | orchestrator | 2025-08-29 15:16:37.795173 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 15:16:37.795180 | orchestrator | Friday 29 August 2025 15:10:49 +0000 (0:00:00.857) 0:03:23.125 ********* 2025-08-29 15:16:37.795186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:37.795199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.795205 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.795219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:37.795227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.795236 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.795243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:37.795255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.795261 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.795268 | orchestrator | 2025-08-29 15:16:37.795274 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-08-29 15:16:37.795280 | orchestrator | Friday 29 August 2025 15:10:51 +0000 (0:00:01.731) 0:03:24.857 ********* 2025-08-29 15:16:37.795292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.795303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.795311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.795323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.795335 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.795342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.795349 | orchestrator | 2025-08-29 15:16:37.795355 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-08-29 15:16:37.795361 | orchestrator | Friday 29 August 2025 15:10:53 +0000 (0:00:02.841) 0:03:27.699 ********* 2025-08-29 15:16:37.795784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.795810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 
15:16:37.795837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.795845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.795858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.795870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.795876 | orchestrator | 2025-08-29 15:16:37.795883 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-08-29 15:16:37.795889 | orchestrator | Friday 29 August 2025 15:11:04 +0000 (0:00:10.642) 0:03:38.342 ********* 2025-08-29 15:16:37.795896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:37.795918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.795925 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.795932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-08-29 15:16:37.795946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.795953 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.795984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 15:16:37.795991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.795998 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.796004 | orchestrator |
2025-08-29 15:16:37.796010 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-08-29 15:16:37.796016 | orchestrator | Friday 29 August 2025 15:11:05 +0000 (0:00:00.940) 0:03:39.283 *********
2025-08-29 15:16:37.796023 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:16:37.796029 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:37.796035 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:37.796041 | orchestrator |
2025-08-29 15:16:37.796064 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-08-29 15:16:37.796071 | orchestrator | Friday 29 August 2025 15:11:07 +0000 (0:00:02.317) 0:03:41.600 *********
2025-08-29 15:16:37.796077 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.796083 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.796089 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.796095 | orchestrator |
2025-08-29 15:16:37.796132 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-08-29 15:16:37.796139 | orchestrator | Friday 29 August 2025 15:11:08 +0000 (0:00:00.381) 0:03:41.982 *********
2025-08-29 15:16:37.796152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-08-29 15:16:37.796165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image':
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.796172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.796195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-08-29 15:16:37.796208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.796218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.796225 | orchestrator |
2025-08-29 15:16:37.796231 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-08-29 15:16:37.796237 | orchestrator | Friday 29 August 2025 15:11:11 +0000 (0:00:03.266) 0:03:45.248 *********
2025-08-29 15:16:37.796244 | orchestrator |
2025-08-29 15:16:37.796250 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-08-29 15:16:37.796256 | orchestrator | Friday 29 August 2025 15:11:11 +0000 (0:00:00.287) 0:03:45.536 *********
2025-08-29 15:16:37.796262 | orchestrator |
2025-08-29 15:16:37.796268 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-08-29 15:16:37.796274 | orchestrator | Friday 29 August 2025 15:11:11 +0000 (0:00:00.223) 0:03:45.760 *********
2025-08-29 15:16:37.796280 | orchestrator |
2025-08-29 15:16:37.796287 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-08-29 15:16:37.796293 | orchestrator | Friday 29 August 2025 15:11:12 +0000 (0:00:00.141) 0:03:45.901 *********
2025-08-29 15:16:37.796299 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:16:37.796305 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:37.796311 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:37.796317 | orchestrator |
2025-08-29 15:16:37.796323 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-08-29 15:16:37.796329 | orchestrator | Friday 29 August 2025 15:11:32 +0000 (0:00:20.684) 0:04:06.586 *********
2025-08-29 15:16:37.796335 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:37.796341 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:37.796348 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:16:37.796354 | orchestrator |
2025-08-29 15:16:37.796360 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-08-29 15:16:37.796366 | orchestrator |
2025-08-29 15:16:37.796372 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:16:37.796378 | orchestrator | Friday 29 August 2025 15:11:42 +0000 (0:00:09.592) 0:04:16.178 *********
2025-08-29 15:16:37.796385 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:16:37.796392 | orchestrator |
2025-08-29 15:16:37.796399 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:16:37.796406 | orchestrator | Friday 29 August 2025 15:11:43 +0000 (0:00:01.047) 0:04:17.225 *********
2025-08-29 15:16:37.796412 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.796419 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:37.796426 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.796433 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.796439 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.796446 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.796453 | orchestrator |
2025-08-29 15:16:37.796468 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-08-29 15:16:37.796785 | orchestrator | Friday 29 August 2025 15:11:44 +0000 (0:00:00.710) 0:04:17.935 *********
2025-08-29 15:16:37.796792 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.796798 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.796804 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.796810 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-08-29 15:16:37.796817 | orchestrator |
2025-08-29 15:16:37.796823 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-08-29 15:16:37.796850 | orchestrator | Friday 29 August 2025 15:11:45 +0000 (0:00:01.336) 0:04:19.272 *********
2025-08-29 15:16:37.796857 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:16:37.796864 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:16:37.796870 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:16:37.796876 | orchestrator |
2025-08-29 15:16:37.796883 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-08-29 15:16:37.796889 | orchestrator | Friday 29 August 2025 15:11:46 +0000 (0:00:00.837) 0:04:20.109 *********
2025-08-29 15:16:37.796895 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:16:37.796901 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:16:37.796907 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:16:37.796913 | orchestrator |
2025-08-29 15:16:37.796919 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-08-29 15:16:37.796925 | orchestrator | Friday 29 August 2025 15:11:48 +0000 (0:00:01.782) 0:04:21.892 *********
2025-08-29 15:16:37.796932 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-08-29 15:16:37.796938 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.796944 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-08-29 15:16:37.796950 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:37.796956 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-08-29 15:16:37.796980 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.796987 | orchestrator |
2025-08-29 15:16:37.796993 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-08-29 15:16:37.796999 | orchestrator | Friday 29 August 2025 15:11:49 +0000 (0:00:01.502) 0:04:23.395 *********
2025-08-29 15:16:37.797005 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:16:37.797011 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:16:37.797022 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:16:37.797029 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:16:37.797035 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.797041 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:16:37.797047 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:16:37.797053 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:16:37.797059 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:16:37.797065 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.797072 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-08-29 15:16:37.797078 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:16:37.797084 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.797090 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:16:37.797096 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-08-29 15:16:37.797108 | orchestrator |
2025-08-29 15:16:37.797114 | orchestrator | TASK [nova-cell : Install
udev kolla kvm rules] ******************************** 2025-08-29 15:16:37.797120 | orchestrator | Friday 29 August 2025 15:11:51 +0000 (0:00:01.796) 0:04:25.192 ********* 2025-08-29 15:16:37.797126 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.797132 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.797139 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.797145 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:37.797151 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:37.797157 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:37.797163 | orchestrator | 2025-08-29 15:16:37.797169 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-08-29 15:16:37.797175 | orchestrator | Friday 29 August 2025 15:11:52 +0000 (0:00:01.468) 0:04:26.660 ********* 2025-08-29 15:16:37.797181 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.797187 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.797194 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.797200 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:37.797206 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:37.797212 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:37.797218 | orchestrator | 2025-08-29 15:16:37.797224 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-08-29 15:16:37.797230 | orchestrator | Friday 29 August 2025 15:11:55 +0000 (0:00:02.270) 0:04:28.930 ********* 2025-08-29 15:16:37.797237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797288 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797414 | orchestrator | 2025-08-29 15:16:37.797421 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 15:16:37.797432 | orchestrator | Friday 29 August 2025 15:11:59 +0000 (0:00:04.380) 0:04:33.311 ********* 2025-08-29 15:16:37.797443 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:16:37.797450 | orchestrator | 2025-08-29 15:16:37.797457 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-08-29 15:16:37.797464 | orchestrator | Friday 29 August 2025 15:12:01 +0000 (0:00:01.677) 0:04:34.988 ********* 2025-08-29 15:16:37.797471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797512 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797615 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.797635 | orchestrator | 2025-08-29 15:16:37.797641 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-08-29 15:16:37.797647 | orchestrator | Friday 29 August 2025 15:12:06 +0000 (0:00:05.710) 0:04:40.699 ********* 2025-08-29 15:16:37.797672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.797686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:37.797696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.797702 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.797709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.797716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:37.797740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.797748 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.797765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:37.797780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.797789 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.797798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.797807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}})  2025-08-29 15:16:37.797816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.797824 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.797864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:37.797888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.797898 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.797912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:37.797922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.797931 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.797941 | orchestrator | 2025-08-29 15:16:37.797951 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-08-29 15:16:37.798037 | orchestrator | Friday 29 August 2025 15:12:10 +0000 (0:00:03.307) 0:04:44.007 ********* 2025-08-29 15:16:37.798053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.798062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:37.798099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.798114 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.798121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.798131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:37.798138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.798144 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.798151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.798181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:37.798189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.798196 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.798216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:37.798223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.798230 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.798236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:37.798243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.798249 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.798262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-08-29 15:16:37.798286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-08-29 15:16:37.798293 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.798299 | orchestrator | 2025-08-29 15:16:37.798306 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-08-29 15:16:37.798312 | orchestrator | Friday 29 August 2025 15:12:14 +0000 (0:00:03.828) 0:04:47.836 ********* 2025-08-29 15:16:37.798318 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.798324 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.798330 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.798336 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-08-29 15:16:37.798343 | orchestrator | 2025-08-29 15:16:37.798349 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-08-29 15:16:37.798355 | 
orchestrator | Friday 29 August 2025 15:12:15 +0000 (0:00:01.098) 0:04:48.934 ********* 2025-08-29 15:16:37.798361 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 15:16:37.798368 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 15:16:37.798374 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 15:16:37.798380 | orchestrator | 2025-08-29 15:16:37.798386 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-08-29 15:16:37.798392 | orchestrator | Friday 29 August 2025 15:12:16 +0000 (0:00:01.409) 0:04:50.343 ********* 2025-08-29 15:16:37.798402 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 15:16:37.798408 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-08-29 15:16:37.798414 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-08-29 15:16:37.798420 | orchestrator | 2025-08-29 15:16:37.798426 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-08-29 15:16:37.798433 | orchestrator | Friday 29 August 2025 15:12:18 +0000 (0:00:01.844) 0:04:52.188 ********* 2025-08-29 15:16:37.798439 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:16:37.798445 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:16:37.798451 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:16:37.798457 | orchestrator | 2025-08-29 15:16:37.798463 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-08-29 15:16:37.798469 | orchestrator | Friday 29 August 2025 15:12:18 +0000 (0:00:00.478) 0:04:52.666 ********* 2025-08-29 15:16:37.798476 | orchestrator | ok: [testbed-node-3] 2025-08-29 15:16:37.798482 | orchestrator | ok: [testbed-node-4] 2025-08-29 15:16:37.798488 | orchestrator | ok: [testbed-node-5] 2025-08-29 15:16:37.798494 | orchestrator | 2025-08-29 15:16:37.798500 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-08-29 15:16:37.798506 
| orchestrator | Friday 29 August 2025 15:12:19 +0000 (0:00:00.919) 0:04:53.585 ********* 2025-08-29 15:16:37.798512 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 15:16:37.798519 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 15:16:37.798529 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 15:16:37.798535 | orchestrator | 2025-08-29 15:16:37.798541 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-08-29 15:16:37.798547 | orchestrator | Friday 29 August 2025 15:12:21 +0000 (0:00:01.520) 0:04:55.106 ********* 2025-08-29 15:16:37.798552 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 15:16:37.798558 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 15:16:37.798563 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 15:16:37.798568 | orchestrator | 2025-08-29 15:16:37.798574 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-08-29 15:16:37.798579 | orchestrator | Friday 29 August 2025 15:12:22 +0000 (0:00:01.388) 0:04:56.494 ********* 2025-08-29 15:16:37.798584 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-08-29 15:16:37.798590 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-08-29 15:16:37.798595 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-08-29 15:16:37.798600 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-08-29 15:16:37.798606 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-08-29 15:16:37.798611 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-08-29 15:16:37.798616 | orchestrator | 2025-08-29 15:16:37.798622 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-08-29 15:16:37.798627 | orchestrator | Friday 
29 August 2025 15:12:27 +0000 (0:00:04.858) 0:05:01.353 ********* 2025-08-29 15:16:37.798632 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.798638 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.798643 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.798648 | orchestrator | 2025-08-29 15:16:37.798654 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-08-29 15:16:37.798659 | orchestrator | Friday 29 August 2025 15:12:28 +0000 (0:00:00.531) 0:05:01.884 ********* 2025-08-29 15:16:37.798665 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.798670 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.798675 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.798681 | orchestrator | 2025-08-29 15:16:37.798686 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-08-29 15:16:37.798691 | orchestrator | Friday 29 August 2025 15:12:28 +0000 (0:00:00.332) 0:05:02.216 ********* 2025-08-29 15:16:37.798700 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:37.798709 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:37.798723 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:37.798734 | orchestrator | 2025-08-29 15:16:37.798769 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-08-29 15:16:37.798779 | orchestrator | Friday 29 August 2025 15:12:29 +0000 (0:00:01.392) 0:05:03.609 ********* 2025-08-29 15:16:37.798788 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 15:16:37.798798 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 15:16:37.798807 | orchestrator | changed: [testbed-node-5] => (item={'uuid': 
'5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-08-29 15:16:37.798816 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 15:16:37.798825 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 15:16:37.798831 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-08-29 15:16:37.798845 | orchestrator | 2025-08-29 15:16:37.798851 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-08-29 15:16:37.798856 | orchestrator | Friday 29 August 2025 15:12:33 +0000 (0:00:03.720) 0:05:07.330 ********* 2025-08-29 15:16:37.798862 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:16:37.798867 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:16:37.798873 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:16:37.798878 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-08-29 15:16:37.798888 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:37.798893 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-08-29 15:16:37.798899 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:37.798904 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-08-29 15:16:37.798909 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:37.798915 | orchestrator | 2025-08-29 15:16:37.798920 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-08-29 15:16:37.798925 | orchestrator | Friday 29 August 2025 15:12:37 +0000 (0:00:04.426) 0:05:11.756 ********* 2025-08-29 15:16:37.798931 | orchestrator | skipping: [testbed-node-3] 2025-08-29 
15:16:37.798936 | orchestrator | 2025-08-29 15:16:37.798942 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-08-29 15:16:37.798947 | orchestrator | Friday 29 August 2025 15:12:38 +0000 (0:00:00.143) 0:05:11.900 ********* 2025-08-29 15:16:37.798952 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.798975 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.798985 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.798992 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.798998 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.799003 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.799008 | orchestrator | 2025-08-29 15:16:37.799014 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-08-29 15:16:37.799019 | orchestrator | Friday 29 August 2025 15:12:38 +0000 (0:00:00.627) 0:05:12.527 ********* 2025-08-29 15:16:37.799024 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-08-29 15:16:37.799030 | orchestrator | 2025-08-29 15:16:37.799035 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-08-29 15:16:37.799040 | orchestrator | Friday 29 August 2025 15:12:39 +0000 (0:00:00.802) 0:05:13.330 ********* 2025-08-29 15:16:37.799046 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.799051 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.799056 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.799061 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.799067 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.799072 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.799077 | orchestrator | 2025-08-29 15:16:37.799083 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-08-29 15:16:37.799088 | orchestrator | Friday 29 
August 2025 15:12:40 +0000 (0:00:00.842) 0:05:14.173 ********* 2025-08-29 15:16:37.799094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799106 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799144 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799160 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799166 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-08-29 15:16:37.799212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799217 | orchestrator | 2025-08-29 15:16:37.799223 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-08-29 15:16:37.799229 | orchestrator | Friday 29 August 2025 15:12:44 +0000 (0:00:04.191) 0:05:18.365 ********* 2025-08-29 15:16:37.799238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.799244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:37.799249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.799259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:37.799268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.799274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-08-29 15:16:37.799280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 
15:16:37.799331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-08-29 15:16:37.799373 | orchestrator | 2025-08-29 15:16:37.799378 | 
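The loop items logged above all share one shape: a kolla-ansible style service dict keyed by service name, with `container_name`, `image`, `volumes` (including `''` placeholders for disabled optional mounts), and a `healthcheck` whose `test` is `['CMD-SHELL', '<command>']`. A minimal sketch of reading one of these dicts — the `normalize_service` helper is hypothetical, and the sample dict is abridged from the log entries above:

```python
# Sketch: consume a kolla-ansible style service definition like the loop
# items in the log above. Dropping empty-string volume entries mirrors
# what the logged dicts suggest (templates emit '' for unused mounts).

def normalize_service(value):
    """Return (container_name, mounts, healthcheck_cmd) for one service."""
    mounts = [v for v in value.get("volumes", []) if v]  # drop '' placeholders
    test = value.get("healthcheck", {}).get("test", [])
    # healthchecks in the log are ['CMD-SHELL', '<command>']
    cmd = test[1] if len(test) == 2 and test[0] == "CMD-SHELL" else None
    return value["container_name"], mounts, cmd


# Abridged from the nova-ssh item logged above (not a complete copy).
nova_ssh = {
    "container_name": "nova_ssh",
    "group": "compute",
    "image": "registry.osism.tech/kolla/nova-ssh:2024.2",
    "enabled": True,
    "volumes": [
        "/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla",
        "nova_compute:/var/lib/nova",
        "", "",
    ],
    "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                    "test": ["CMD-SHELL", "healthcheck_listen sshd 8022"],
                    "timeout": "30"},
}

name, mounts, cmd = normalize_service(nova_ssh)
print(name, len(mounts), cmd)  # nova_ssh 4 healthcheck_listen sshd 8022
```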
orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-08-29 15:16:37.799384 | orchestrator | Friday 29 August 2025 15:12:52 +0000 (0:00:08.052) 0:05:26.417 ********* 2025-08-29 15:16:37.799389 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.799394 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.799400 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.799405 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.799411 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.799416 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.799421 | orchestrator | 2025-08-29 15:16:37.799427 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-08-29 15:16:37.799432 | orchestrator | Friday 29 August 2025 15:12:54 +0000 (0:00:01.476) 0:05:27.894 ********* 2025-08-29 15:16:37.799437 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:16:37.799443 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:16:37.799448 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-08-29 15:16:37.799454 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:16:37.799462 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:16:37.799468 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.799473 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:16:37.799478 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.799484 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:16:37.799489 | 
orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-08-29 15:16:37.799495 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.799500 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-08-29 15:16:37.799505 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:16:37.799511 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:16:37.799516 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-08-29 15:16:37.799522 | orchestrator | 2025-08-29 15:16:37.799527 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-08-29 15:16:37.799532 | orchestrator | Friday 29 August 2025 15:12:58 +0000 (0:00:04.806) 0:05:32.700 ********* 2025-08-29 15:16:37.799538 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.799543 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.799548 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.799554 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.799559 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.799564 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.799570 | orchestrator | 2025-08-29 15:16:37.799575 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-08-29 15:16:37.799583 | orchestrator | Friday 29 August 2025 15:12:59 +0000 (0:00:00.524) 0:05:33.224 ********* 2025-08-29 15:16:37.799589 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:16:37.799594 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-compute'})  2025-08-29 15:16:37.799600 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-08-29 15:16:37.799609 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:16:37.799615 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:37.799620 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:37.799625 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:37.799631 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:16:37.799636 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-08-29 15:16:37.799641 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:37.799647 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.799652 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:37.799658 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.799663 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-08-29 15:16:37.799668 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.799674 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:37.799679 | orchestrator | changed: [testbed-node-3] => 
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:37.799684 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:37.799690 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:37.799695 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:37.799700 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-08-29 15:16:37.799706 | orchestrator | 2025-08-29 15:16:37.799711 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-08-29 15:16:37.799717 | orchestrator | Friday 29 August 2025 15:13:06 +0000 (0:00:06.654) 0:05:39.879 ********* 2025-08-29 15:16:37.799722 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:16:37.799728 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:16:37.799736 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-08-29 15:16:37.799741 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:16:37.799747 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:16:37.799752 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:16:37.799757 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-08-29 15:16:37.799763 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 
15:16:37.799768 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-08-29 15:16:37.799773 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:16:37.799778 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:16:37.799790 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:16:37.799795 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-08-29 15:16:37.799800 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:16:37.799806 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.799811 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:16:37.799817 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-08-29 15:16:37.799822 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:16:37.799830 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.799835 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-08-29 15:16:37.799841 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.799846 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:16:37.799852 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:16:37.799857 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-08-29 15:16:37.799862 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:16:37.799868 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:16:37.799873 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-08-29 15:16:37.799878 | orchestrator | 2025-08-29 15:16:37.799884 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-08-29 15:16:37.799889 | orchestrator | Friday 29 August 2025 15:13:15 +0000 (0:00:09.481) 0:05:49.361 ********* 2025-08-29 15:16:37.799894 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.799900 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.799905 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.799910 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.799916 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.799921 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.799927 | orchestrator | 2025-08-29 15:16:37.799932 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-08-29 15:16:37.799937 | orchestrator | Friday 29 August 2025 15:13:16 +0000 (0:00:00.652) 0:05:50.013 ********* 2025-08-29 15:16:37.799943 | orchestrator | skipping: [testbed-node-3] 2025-08-29 15:16:37.799948 | orchestrator | skipping: [testbed-node-4] 2025-08-29 15:16:37.799953 | orchestrator | skipping: [testbed-node-5] 2025-08-29 15:16:37.799997 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:16:37.800003 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.800009 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.800014 | orchestrator | 2025-08-29 15:16:37.800019 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-08-29 15:16:37.800025 | orchestrator | Friday 29 August 2025 15:13:16 +0000 (0:00:00.543) 0:05:50.557 ********* 2025-08-29 15:16:37.800030 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:16:37.800035 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:37.800041 | orchestrator | changed: [testbed-node-3] 2025-08-29 15:16:37.800046 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:37.800051 | orchestrator | changed: [testbed-node-4] 2025-08-29 15:16:37.800057 | orchestrator | changed: [testbed-node-5] 2025-08-29 15:16:37.800062 | orchestrator | 2025-08-29 15:16:37.800067 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-08-29 15:16:37.800073 | orchestrator | Friday 29 August 2025 15:13:19 +0000 (0:00:02.243) 0:05:52.800 ********* 2025-08-29 15:16:37.800082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-08-29 15:16:37.800093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:16:37.800102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:16:37.800114 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:37.800119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:16:37.800125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800134 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.800144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:16:37.800150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:16:37.800159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800164 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.800170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:16:37.800176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800186 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.800191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:16:37.800201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800207 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.800212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:16:37.800221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800227 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.800232 | orchestrator |
2025-08-29 15:16:37.800238 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-08-29 15:16:37.800243 | orchestrator | Friday 29 August 2025 15:13:20 +0000 (0:00:01.531) 0:05:54.332 *********
2025-08-29 15:16:37.800249 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:16:37.800254 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-08-29 15:16:37.800260 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.800265 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:16:37.800271 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-08-29 15:16:37.800276 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:37.800281 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:16:37.800287 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-08-29 15:16:37.800292 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.800298 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-08-29 15:16:37.800303 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-08-29 15:16:37.800308 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.800314 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-08-29 15:16:37.800323 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-08-29 15:16:37.800328 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.800334 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-08-29 15:16:37.800339 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-08-29 15:16:37.800344 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.800350 | orchestrator |
2025-08-29 15:16:37.800355 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-08-29 15:16:37.800361 | orchestrator | Friday 29 August 2025 15:13:21 +0000 (0:00:00.927) 0:05:55.260 *********
2025-08-29 15:16:37.800366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:16:37.800376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:16:37.800382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:16:37.800391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:16:37.800397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-08-29 15:16:37.800408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-08-29 15:16:37.800413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:16:37.800428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:16:37.800437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-08-29 15:16:37.800442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-08-29 15:16:37.800475 | orchestrator |
2025-08-29 15:16:37.800480 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-08-29 15:16:37.800485 | orchestrator | Friday 29 August 2025 15:13:24 +0000 (0:00:03.512) 0:05:58.772 *********
2025-08-29 15:16:37.800490 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.800495 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:37.800499 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.800504 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.800509 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.800514 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.800519 | orchestrator |
2025-08-29 15:16:37.800526 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:16:37.800531 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:00.821) 0:05:59.593 *********
2025-08-29 15:16:37.800541 | orchestrator |
2025-08-29 15:16:37.800546 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:16:37.800550 | orchestrator | Friday 29 August 2025 15:13:25 +0000 (0:00:00.140) 0:05:59.734 *********
2025-08-29 15:16:37.800555 | orchestrator |
2025-08-29 15:16:37.800560 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:16:37.800565 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:00.145) 0:05:59.880 *********
2025-08-29 15:16:37.800570 | orchestrator |
2025-08-29 15:16:37.800574 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:16:37.800579 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:00.132) 0:06:00.012 *********
2025-08-29 15:16:37.800584 | orchestrator |
2025-08-29 15:16:37.800589 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:16:37.800594 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:00.128) 0:06:00.141 *********
2025-08-29 15:16:37.800598 | orchestrator |
2025-08-29 15:16:37.800603 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-08-29 15:16:37.800608 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:00.128) 0:06:00.269 *********
2025-08-29 15:16:37.800613 | orchestrator |
2025-08-29 15:16:37.800617 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-08-29 15:16:37.800622 | orchestrator | Friday 29 August 2025 15:13:26 +0000 (0:00:00.291) 0:06:00.561 *********
2025-08-29 15:16:37.800627 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:16:37.800632 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:37.800636 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:37.800641 | orchestrator |
2025-08-29 15:16:37.800646 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-08-29 15:16:37.800651 | orchestrator | Friday 29 August 2025 15:13:34 +0000 (0:00:07.654) 0:06:08.216 *********
2025-08-29 15:16:37.800656 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:16:37.800660 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:37.800665 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:37.800670 | orchestrator |
2025-08-29 15:16:37.800675 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-08-29 15:16:37.800680 | orchestrator | Friday 29 August 2025 15:13:49 +0000 (0:00:15.041) 0:06:23.257 *********
2025-08-29 15:16:37.800685 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:16:37.800689 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:16:37.800694 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:16:37.800699 | orchestrator |
2025-08-29 15:16:37.800704 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-08-29 15:16:37.800708 | orchestrator | Friday 29 August 2025 15:14:09 +0000 (0:00:20.324) 0:06:43.581 *********
2025-08-29 15:16:37.800713 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:16:37.800718 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:16:37.800723 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:16:37.800727 | orchestrator |
2025-08-29 15:16:37.800732 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-08-29 15:16:37.800737 | orchestrator | Friday 29 August 2025 15:14:55 +0000 (0:00:45.408) 0:07:28.990 *********
2025-08-29 15:16:37.800742 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:16:37.800747 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:16:37.800751 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:16:37.800756 | orchestrator |
2025-08-29 15:16:37.800761 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-08-29 15:16:37.800766 | orchestrator | Friday 29 August 2025 15:14:56 +0000 (0:00:01.380) 0:07:30.370 *********
2025-08-29 15:16:37.800770 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:16:37.800775 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:16:37.800780 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:16:37.800785 | orchestrator |
2025-08-29 15:16:37.800789 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-08-29 15:16:37.800802 | orchestrator | Friday 29 August 2025 15:14:57 +0000 (0:00:00.917) 0:07:31.287 *********
2025-08-29 15:16:37.800807 | orchestrator | changed: [testbed-node-5]
2025-08-29 15:16:37.800812 | orchestrator | changed: [testbed-node-4]
2025-08-29 15:16:37.800817 | orchestrator | changed: [testbed-node-3]
2025-08-29 15:16:37.800822 | orchestrator |
2025-08-29 15:16:37.800827 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-08-29 15:16:37.800831 | orchestrator | Friday 29 August 2025 15:15:26 +0000 (0:00:29.107) 0:08:00.395 *********
2025-08-29 15:16:37.800836 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.800841 | orchestrator |
2025-08-29 15:16:37.800846 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-08-29 15:16:37.800851 | orchestrator | Friday 29 August 2025 15:15:26 +0000 (0:00:00.128) 0:08:00.523 *********
2025-08-29 15:16:37.800855 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.800860 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.800865 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.800870 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.800874 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.800879 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-08-29 15:16:37.800884 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:16:37.800889 | orchestrator |
2025-08-29 15:16:37.800894 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-08-29 15:16:37.800898 | orchestrator | Friday 29 August 2025 15:15:49 +0000 (0:00:22.714) 0:08:23.238 *********
2025-08-29 15:16:37.800903 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.800908 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.800913 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.800917 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.800922 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.800927 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:37.800932 | orchestrator |
2025-08-29 15:16:37.800939 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-08-29 15:16:37.800944 | orchestrator | Friday 29 August 2025 15:15:58 +0000 (0:00:08.540) 0:08:31.778 *********
2025-08-29 15:16:37.800948 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.800953 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.800967 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.800972 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.800977 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.800982 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-08-29 15:16:37.800987 | orchestrator |
2025-08-29 15:16:37.800992 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-08-29 15:16:37.800996 | orchestrator | Friday 29 August 2025 15:16:02 +0000 (0:00:04.020) 0:08:35.799 *********
2025-08-29 15:16:37.801001 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:16:37.801006 | orchestrator |
2025-08-29 15:16:37.801011 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-08-29 15:16:37.801016 | orchestrator | Friday 29 August 2025 15:16:14 +0000 (0:00:12.585) 0:08:48.384 *********
2025-08-29 15:16:37.801021 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:16:37.801026 | orchestrator |
2025-08-29 15:16:37.801030 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-08-29 15:16:37.801035 | orchestrator | Friday 29 August 2025 15:16:15 +0000 (0:00:01.352) 0:08:49.737 *********
2025-08-29 15:16:37.801040 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:37.801045 | orchestrator |
2025-08-29 15:16:37.801050 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-08-29 15:16:37.801054 | orchestrator | Friday 29 August 2025 15:16:17 +0000 (0:00:01.371) 0:08:51.109 *********
2025-08-29 15:16:37.801063 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-08-29 15:16:37.801068 | orchestrator |
2025-08-29 15:16:37.801072 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-08-29 15:16:37.801077 | orchestrator | Friday 29 August 2025 15:16:27 +0000 (0:00:10.401) 0:09:01.510 *********
2025-08-29 15:16:37.801082 | orchestrator | ok: [testbed-node-3]
2025-08-29 15:16:37.801087 | orchestrator | ok: [testbed-node-4]
2025-08-29 15:16:37.801092 | orchestrator | ok: [testbed-node-5]
2025-08-29 15:16:37.801097 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:16:37.801101 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:16:37.801106 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:16:37.801111 | orchestrator |
2025-08-29 15:16:37.801116 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-08-29 15:16:37.801121 | orchestrator |
2025-08-29 15:16:37.801126 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-08-29 15:16:37.801130 | orchestrator | Friday 29 August 2025 15:16:29 +0000 (0:00:01.775) 0:09:03.285 *********
2025-08-29 15:16:37.801135 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:16:37.801140 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:37.801145 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:37.801150 | orchestrator |
2025-08-29 15:16:37.801155 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-08-29 15:16:37.801159 | orchestrator |
2025-08-29 15:16:37.801164 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-08-29 15:16:37.801169 | orchestrator | Friday 29 August 2025 15:16:30 +0000 (0:00:01.169) 0:09:04.454 *********
2025-08-29 15:16:37.801174 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.801179 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.801183 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.801188 | orchestrator |
2025-08-29 15:16:37.801193 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-08-29 15:16:37.801198 | orchestrator |
2025-08-29 15:16:37.801203 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-08-29 15:16:37.801208 | orchestrator | Friday 29 August 2025 15:16:31 +0000 (0:00:00.543) 0:09:04.998 *********
2025-08-29 15:16:37.801212 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-08-29 15:16:37.801220 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-08-29 15:16:37.801225 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-08-29 15:16:37.801230 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-08-29 15:16:37.801235 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-08-29 15:16:37.801240 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-08-29 15:16:37.801245 | orchestrator | skipping: [testbed-node-3]
2025-08-29 15:16:37.801249 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-08-29 15:16:37.801254 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-08-29 15:16:37.801259 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-08-29 15:16:37.801264 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-08-29 15:16:37.801269 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-08-29 15:16:37.801274 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-08-29 15:16:37.801278 | orchestrator | skipping: [testbed-node-4]
2025-08-29 15:16:37.801283 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-08-29 15:16:37.801288 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-08-29 15:16:37.801293 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-08-29 15:16:37.801298 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-08-29 15:16:37.801303 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-08-29 15:16:37.801308 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-08-29 15:16:37.801316 | orchestrator | skipping: [testbed-node-5]
2025-08-29 15:16:37.801321 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-08-29 15:16:37.801325 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-08-29 15:16:37.801330 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-08-29 15:16:37.801339 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-08-29 15:16:37.801344 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-08-29 15:16:37.801349 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-08-29 15:16:37.801353 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.801358 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-08-29 15:16:37.801363 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-08-29 15:16:37.801368 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-08-29 15:16:37.801373 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-08-29 15:16:37.801378 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-08-29 15:16:37.801383 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-08-29 15:16:37.801387 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.801392 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-08-29 15:16:37.801397 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-08-29 15:16:37.801402 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-08-29 15:16:37.801407 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-08-29 15:16:37.801411 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-08-29 15:16:37.801416 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-08-29 15:16:37.801421 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.801426 | orchestrator |
2025-08-29 15:16:37.801431 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-08-29 15:16:37.801436 | orchestrator |
2025-08-29 15:16:37.801440 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-08-29 15:16:37.801445 | orchestrator | Friday 29 August 2025 15:16:32 +0000 (0:00:01.372) 0:09:06.371 *********
2025-08-29 15:16:37.801450 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-08-29 15:16:37.801455 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-08-29 15:16:37.801460 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.801465 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-08-29 15:16:37.801470 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-08-29 15:16:37.801474 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.801479 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-08-29 15:16:37.801484 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-08-29 15:16:37.801489 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.801493 | orchestrator |
2025-08-29 15:16:37.801498 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-08-29 15:16:37.801503 | orchestrator |
2025-08-29 15:16:37.801508 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-08-29 15:16:37.801513 | orchestrator | Friday 29 August 2025 15:16:33 +0000 (0:00:00.732) 0:09:07.103 *********
2025-08-29 15:16:37.801518 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.801522 | orchestrator |
2025-08-29 15:16:37.801527 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-08-29 15:16:37.801532 | orchestrator |
2025-08-29 15:16:37.801537 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-08-29 15:16:37.801542 | orchestrator | Friday 29 August 2025 15:16:33 +0000 (0:00:00.665) 0:09:07.768 *********
2025-08-29 15:16:37.801546 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:37.801551 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:37.801559 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:37.801564 | orchestrator |
2025-08-29 15:16:37.801569 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:16:37.801574 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-08-29 15:16:37.801582 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-08-29 15:16:37.801587 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-08-29 15:16:37.801592 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-08-29 15:16:37.801597 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-08-29 15:16:37.801601 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-08-29 15:16:37.801606 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-08-29 15:16:37.801611 | orchestrator |
2025-08-29 15:16:37.801616 | orchestrator |
2025-08-29 15:16:37.801621 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:16:37.801626 | orchestrator | Friday 29 August 2025 15:16:34 +0000 (0:00:00.431) 0:09:08.200 *********
2025-08-29 15:16:37.801630 | orchestrator | ===============================================================================
2025-08-29 15:16:37.801635 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 45.41s
2025-08-29 15:16:37.801640 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.18s
2025-08-29 15:16:37.801647 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.11s
2025-08-29 15:16:37.801652 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.71s
2025-08-29 15:16:37.801657 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.98s
2025-08-29 15:16:37.801662 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.68s
2025-08-29 15:16:37.801667 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.32s
2025-08-29 15:16:37.801671 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.29s
2025-08-29 15:16:37.801676 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.04s
2025-08-29 15:16:37.801681 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.59s
2025-08-29 15:16:37.801686 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.38s
2025-08-29 15:16:37.801691 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.62s
2025-08-29 15:16:37.801695 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.59s
2025-08-29 15:16:37.801700 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.65s
2025-08-29 15:16:37.801705 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.64s
2025-08-29 15:16:37.801710 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.40s
2025-08-29 15:16:37.801715 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.59s
2025-08-29 15:16:37.801719 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.48s
2025-08-29 15:16:37.801724 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.89s
2025-08-29 15:16:37.801729 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.54s
2025-08-29 15:16:37.801737 | orchestrator | 2025-08-29 15:16:37 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:16:37.801742 | orchestrator | 2025-08-29 15:16:37 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:16:37.801747 | orchestrator | 2025-08-29 15:16:37 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:40.834119 | orchestrator | 2025-08-29 15:16:40 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:16:40.835193 | orchestrator | 2025-08-29 15:16:40 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:16:40.835380 | orchestrator | 2025-08-29 15:16:40 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:43.870882 | orchestrator | 2025-08-29 15:16:43 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:16:43.872400 | orchestrator | 2025-08-29 15:16:43 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:16:43.872471 | orchestrator | 2025-08-29 15:16:43 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:46.917069 | orchestrator | 2025-08-29 15:16:46 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:16:46.917812 | orchestrator | 2025-08-29 15:16:46 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:16:46.917859 | orchestrator | 2025-08-29 15:16:46 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:49.967412 | orchestrator | 2025-08-29 15:16:49 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:16:49.969654 | orchestrator | 2025-08-29 15:16:49 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:16:49.969702 | orchestrator | 2025-08-29 15:16:49 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:53.011284 | orchestrator | 2025-08-29 15:16:53 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state STARTED
2025-08-29 15:16:53.011356 | orchestrator | 2025-08-29 15:16:53 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:16:53.011362 | orchestrator | 2025-08-29 15:16:53 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:16:56.052991 | orchestrator |
2025-08-29 15:16:56.053124 | orchestrator |
2025-08-29 15:16:56.053135 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-08-29 15:16:56.053146 | orchestrator |
2025-08-29 15:16:56.053153 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-08-29 15:16:56.053160 | orchestrator | Friday 29 August 2025 15:14:28 +0000 (0:00:00.280) 0:00:00.280 *********
2025-08-29 15:16:56.053166 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:16:56.053174 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:16:56.053180 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:16:56.053186 | orchestrator |
2025-08-29 15:16:56.053193 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-08-29 15:16:56.053199 | orchestrator | Friday 29 August 2025 15:14:29 +0000 (0:00:00.334) 0:00:00.614 *********
2025-08-29 15:16:56.053205 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-08-29 15:16:56.053214 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-08-29 15:16:56.053241 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-08-29 15:16:56.053253 | orchestrator |
2025-08-29 15:16:56.053264 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-08-29 15:16:56.053276 | orchestrator |
2025-08-29 15:16:56.053287 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-08-29 15:16:56.053299 | orchestrator | Friday 29 August 2025 15:14:29 +0000 (0:00:00.397) 0:00:01.012 *********
2025-08-29 15:16:56.053335 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:16:56.053348 | orchestrator |
2025-08-29 15:16:56.053359 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-08-29 15:16:56.053370 | orchestrator | Friday 29 August 2025 15:14:30 +0000 (0:00:00.477) 0:00:01.490 *********
2025-08-29 15:16:56.053650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053683 | orchestrator |
2025-08-29 15:16:56.053690 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-08-29 15:16:56.053697 | orchestrator | Friday 29 August 2025 15:14:30 +0000 (0:00:00.681) 0:00:02.171 *********
2025-08-29 15:16:56.053703 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-08-29 15:16:56.053710 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-08-29 15:16:56.053716 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:16:56.053723 | orchestrator |
2025-08-29 15:16:56.053729 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-08-29 15:16:56.053735 | orchestrator | Friday 29 August 2025 15:14:31 +0000 (0:00:00.748) 0:00:02.919 *********
2025-08-29 15:16:56.053742 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-08-29 15:16:56.053748 | orchestrator |
2025-08-29 15:16:56.053755 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-08-29 15:16:56.053761 | orchestrator | Friday 29 August 2025 15:14:32 +0000 (0:00:00.687) 0:00:03.607 *********
2025-08-29 15:16:56.053780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053817 | orchestrator |
2025-08-29 15:16:56.053823 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-08-29 15:16:56.053829 | orchestrator | Friday 29 August 2025 15:14:33 +0000 (0:00:01.299) 0:00:04.907 *********
2025-08-29 15:16:56.053836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053843 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:56.053850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053856 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:56.053870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053881 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:56.053887 | orchestrator |
2025-08-29 15:16:56.053894 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-08-29 15:16:56.053900 | orchestrator | Friday 29 August 2025 15:14:33 +0000 (0:00:00.406) 0:00:05.313 *********
2025-08-29 15:16:56.053910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053917 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:56.053923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053930 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:56.053958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053965 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:56.053972 | orchestrator |
2025-08-29 15:16:56.053978 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-08-29 15:16:56.053984 | orchestrator | Friday 29 August 2025 15:14:34 +0000 (0:00:01.007) 0:00:06.321 *********
2025-08-29 15:16:56.053990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.053997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.054111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.054129 | orchestrator |
2025-08-29 15:16:56.054140 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-08-29 15:16:56.054150 | orchestrator | Friday 29 August 2025 15:14:36 +0000 (0:00:01.287) 0:00:07.608 *********
2025-08-29 15:16:56.054372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.054386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.054393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-08-29 15:16:56.054400 | orchestrator |
2025-08-29 15:16:56.054406 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-08-29 15:16:56.054412 | orchestrator | Friday 29 August 2025 15:14:37 +0000 (0:00:01.328) 0:00:08.937 *********
2025-08-29 15:16:56.054419 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:56.054425 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:56.054431 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:56.054437 | orchestrator |
2025-08-29 15:16:56.054443 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-08-29 15:16:56.054450 | orchestrator | Friday 29 August 2025 15:14:38 +0000 (0:00:00.500) 0:00:09.438 *********
2025-08-29 15:16:56.054456 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 15:16:56.054471 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 15:16:56.054477 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-08-29 15:16:56.054483 | orchestrator |
2025-08-29 15:16:56.054489 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-08-29 15:16:56.054495 | orchestrator | Friday 29 August 2025 15:14:39 +0000 (0:00:01.404) 0:00:10.842 *********
2025-08-29 15:16:56.054502 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 15:16:56.054508 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 15:16:56.054514 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-08-29 15:16:56.054520 | orchestrator |
2025-08-29 15:16:56.054527 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-08-29 15:16:56.054533 | orchestrator | Friday 29 August 2025 15:14:40 +0000 (0:00:01.270) 0:00:12.113 *********
2025-08-29 15:16:56.054561 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-08-29 15:16:56.054568 | orchestrator |
2025-08-29 15:16:56.054574 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-08-29 15:16:56.054580 | orchestrator | Friday 29 August 2025 15:14:41 +0000 (0:00:00.791) 0:00:12.904 *********
2025-08-29 15:16:56.054586 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-08-29 15:16:56.054592 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-08-29 15:16:56.054599 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:16:56.054605 | orchestrator | ok: [testbed-node-1]
2025-08-29 15:16:56.054611 | orchestrator | ok: [testbed-node-2]
2025-08-29 15:16:56.054617 | orchestrator |
2025-08-29 15:16:56.054624 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-08-29 15:16:56.054630 | orchestrator | Friday 29 August 2025 15:14:42 +0000 (0:00:00.574) 0:00:13.644 *********
2025-08-29 15:16:56.054636 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:56.054647 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:56.054653 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:56.054659 | orchestrator |
2025-08-29 15:16:56.054665 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-08-29 15:16:56.054671 | orchestrator | Friday 29 August 2025 15:14:42 +0000 (0:00:00.574) 0:00:14.219 *********
2025-08-29 15:16:56.054679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094885, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8647096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.054688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094885, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8647096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.054694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094885, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8647096, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.054706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095016, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.883994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.054730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095016, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.883994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.054741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1095016, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.883994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.054748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094902, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8674967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.054755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094902, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8674967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.054761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094902, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8674967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095019, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.886321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095019, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.886321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1095019, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.886321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094966, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.877523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094966, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.877523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094966, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.877523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094999, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8819818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094999, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8819818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094999, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8819818, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094884, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.862795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094884, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.862795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094884, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.862795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094893, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8654938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094893, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8654938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.054928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094893, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8654938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56 | INFO  | Task f7f68397-c8dd-4d7e-965c-1c18ffb76152 is in state SUCCESS 2025-08-29 15:16:56.054993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094907, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.867849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094907, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.867849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode':
1094907, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.867849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094981, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8795161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094981, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8795161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 
'inode': 1094981, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8795161, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1095012, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.883464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1095012, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.883464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
12997, 'inode': 1095012, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.883464, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094896, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8665528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094896, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8665528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 80386, 'inode': 1094896, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8665528, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094994, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8813608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094994, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8813608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094994, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8813608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094972, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8786278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094972, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8786278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094972, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8786278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094922, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.876819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094922, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.876819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094922, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.876819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094918, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8693292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094918, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8693292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094918, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8693292, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094983, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8802462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094983, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8802462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094983, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8802462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094911, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8689108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094911, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8689108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094911, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8689108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095008, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8826342, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095008, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8826342, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1095008, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8826342, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095253, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9285777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095253, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9285777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1095253, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9285777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1095092, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9031928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1095092, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9031928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1095092, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9031928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1095057, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8904796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1095057, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8904796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1095057, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8904796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1095145, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9078462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1095145, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9078462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1095145, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9078462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1095038, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.887838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1095038, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.887838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1095038, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.887838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095203, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.917966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095203, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.917966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1095203, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.917966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1095155, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9145386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1095155, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9145386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1095155, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9145386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1095213, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9185727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1095213, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9185727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1095213, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9185727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095247, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9269607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095247, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9269607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1095247, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9269607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1095196, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9168272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1095196, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9168272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1095196, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9168272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1095134, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.905707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1095134, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.905707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1095134, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.905707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1095083, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8964214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1095083, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8964214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1095083, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8964214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1095122, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.90461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1095122, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.90461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1095122, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.90461, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1095064, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8950486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1095064, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8950486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1095064, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8950486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1095141, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9062986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1095141, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9062986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1095141, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9062986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095233, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9244215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095233, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9244215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1095233, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9244215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095225, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9209645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095225, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9209645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-08-29 15:16:56.055787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1095225, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9209645, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth':
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1095046, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8885791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1095046, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8885791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1095046, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.8885791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1095051, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.889343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1095051, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.889343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1095051, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.889343, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1095190, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.915688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1095190, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.915688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 70691, 'inode': 1095190, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.915688, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095217, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9194386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095217, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9194386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1095217, 'dev': 96, 'nlink': 1, 'atime': 1756453149.0, 'mtime': 1756453149.0, 'ctime': 1756477369.9194386, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-08-29 15:16:56.055896 | orchestrator | 2025-08-29 15:16:56.055905 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-08-29 15:16:56.055916 | orchestrator | Friday 29 August 2025 15:15:21 +0000 (0:00:38.766) 0:00:52.985 ********* 2025-08-29 15:16:56.055927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:56.056007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:56.056015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-08-29 15:16:56.056028 | orchestrator | 2025-08-29 15:16:56.056035 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-08-29 15:16:56.056041 | orchestrator | Friday 29 August 2025 15:15:22 +0000 (0:00:01.083) 0:00:54.069 ********* 2025-08-29 15:16:56.056048 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:56.056054 | orchestrator | 2025-08-29 15:16:56.056060 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-08-29 15:16:56.056067 | orchestrator | Friday 29 August 2025 15:15:25 +0000 (0:00:02.465) 0:00:56.534 ********* 2025-08-29 15:16:56.056073 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:56.056079 | orchestrator | 2025-08-29 15:16:56.056085 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:56.056092 | orchestrator | Friday 29 August 2025 15:15:27 +0000 (0:00:02.453) 0:00:58.988 ********* 2025-08-29 15:16:56.056098 | orchestrator | 2025-08-29 15:16:56.056104 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:56.056115 | orchestrator | 
Friday 29 August 2025 15:15:27 +0000 (0:00:00.215) 0:00:59.203 ********* 2025-08-29 15:16:56.056121 | orchestrator | 2025-08-29 15:16:56.056128 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-08-29 15:16:56.056134 | orchestrator | Friday 29 August 2025 15:15:28 +0000 (0:00:00.218) 0:00:59.422 ********* 2025-08-29 15:16:56.056140 | orchestrator | 2025-08-29 15:16:56.056146 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-08-29 15:16:56.056152 | orchestrator | Friday 29 August 2025 15:15:28 +0000 (0:00:00.550) 0:00:59.972 ********* 2025-08-29 15:16:56.056159 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:56.056165 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:56.056171 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:16:56.056177 | orchestrator | 2025-08-29 15:16:56.056184 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-08-29 15:16:56.056190 | orchestrator | Friday 29 August 2025 15:15:30 +0000 (0:00:02.284) 0:01:02.256 ********* 2025-08-29 15:16:56.056196 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:16:56.056210 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:16:56.056222 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-08-29 15:16:56.056233 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-08-29 15:16:56.056240 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-08-29 15:16:56.056246 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:16:56.056252 | orchestrator |
2025-08-29 15:16:56.056259 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-08-29 15:16:56.056265 | orchestrator | Friday 29 August 2025 15:16:09 +0000 (0:00:38.951) 0:01:41.208 *********
2025-08-29 15:16:56.056271 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:56.056277 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:16:56.056284 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:16:56.056290 | orchestrator |
2025-08-29 15:16:56.056296 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-08-29 15:16:56.056302 | orchestrator | Friday 29 August 2025 15:16:47 +0000 (0:00:37.826) 0:02:19.034 *********
2025-08-29 15:16:56.056308 | orchestrator | ok: [testbed-node-0]
2025-08-29 15:16:56.056323 | orchestrator |
2025-08-29 15:16:56.056329 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-08-29 15:16:56.056335 | orchestrator | Friday 29 August 2025 15:16:49 +0000 (0:00:02.079) 0:02:21.114 *********
2025-08-29 15:16:56.056341 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:56.056347 | orchestrator | skipping: [testbed-node-1]
2025-08-29 15:16:56.056354 | orchestrator | skipping: [testbed-node-2]
2025-08-29 15:16:56.056360 | orchestrator |
2025-08-29 15:16:56.056366 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-08-29 15:16:56.056372 | orchestrator | Friday 29 August 2025 15:16:50 +0000 (0:00:00.513) 0:02:21.627 *********
2025-08-29 15:16:56.056380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-08-29 15:16:56.056388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-08-29 15:16:56.056394 | orchestrator |
2025-08-29 15:16:56.056401 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-08-29 15:16:56.056407 | orchestrator | Friday 29 August 2025 15:16:52 +0000 (0:00:02.191) 0:02:23.818 *********
2025-08-29 15:16:56.056413 | orchestrator | skipping: [testbed-node-0]
2025-08-29 15:16:56.056419 | orchestrator |
2025-08-29 15:16:56.056426 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:16:56.056433 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 15:16:56.056440 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 15:16:56.056447 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 15:16:56.056453 | orchestrator |
2025-08-29 15:16:56.056459 | orchestrator |
2025-08-29 15:16:56.056465 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:16:56.056471 | orchestrator | Friday 29 August 2025 15:16:52 +0000 (0:00:00.279) 0:02:24.098 *********
2025-08-29 15:16:56.056478 | orchestrator | ===============================================================================
2025-08-29 15:16:56.056484 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.95s
2025-08-29 15:16:56.056490 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.77s
2025-08-29 15:16:56.056496 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 37.83s
2025-08-29 15:16:56.056502 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.47s
2025-08-29 15:16:56.056508 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.45s
2025-08-29 15:16:56.056518 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.28s
2025-08-29 15:16:56.056525 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.19s
2025-08-29 15:16:56.056531 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.08s
2025-08-29 15:16:56.056539 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.40s
2025-08-29 15:16:56.056549 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.33s
2025-08-29 15:16:56.056561 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.30s
2025-08-29 15:16:56.056569 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s
2025-08-29 15:16:56.056580 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.27s
2025-08-29 15:16:56.056590 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s
2025-08-29 15:16:56.056597 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.01s
2025-08-29 15:16:56.056603 | orchestrator | grafana : Flush handlers ------------------------------------------------ 0.98s
2025-08-29 15:16:56.056609 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.79s
2025-08-29 15:16:56.056615 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.75s
2025-08-29 15:16:56.056621 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s
2025-08-29 15:16:56.056627 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s
2025-08-29 15:16:56.056633 | orchestrator | 2025-08-29 15:16:56 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:16:56.056640 | orchestrator | 2025-08-29 15:16:56 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:19:16.134170 | orchestrator | 2025-08-29 15:19:16 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED
2025-08-29 15:19:16.134249 | orchestrator | 2025-08-29 15:19:16 | INFO  | Wait 1 second(s) until the next check
2025-08-29 15:19:19.177464 | orchestrator
| 2025-08-29 15:19:19 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED 2025-08-29 15:19:19.177554 | orchestrator | 2025-08-29 15:19:19 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:22.217727 | orchestrator | 2025-08-29 15:19:22 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED 2025-08-29 15:19:22.217834 | orchestrator | 2025-08-29 15:19:22 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:25.263935 | orchestrator | 2025-08-29 15:19:25 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED 2025-08-29 15:19:25.264006 | orchestrator | 2025-08-29 15:19:25 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:28.306667 | orchestrator | 2025-08-29 15:19:28 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED 2025-08-29 15:19:28.306759 | orchestrator | 2025-08-29 15:19:28 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:31.352700 | orchestrator | 2025-08-29 15:19:31 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state STARTED 2025-08-29 15:19:31.352856 | orchestrator | 2025-08-29 15:19:31 | INFO  | Wait 1 second(s) until the next check 2025-08-29 15:19:34.400081 | orchestrator | 2025-08-29 15:19:34 | INFO  | Task eb5a1d44-8e87-4444-911f-535416802409 is in state SUCCESS 2025-08-29 15:19:34.401150 | orchestrator | 2025-08-29 15:19:34.401200 | orchestrator | 2025-08-29 15:19:34.401220 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-08-29 15:19:34.401238 | orchestrator | 2025-08-29 15:19:34.401255 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-08-29 15:19:34.401272 | orchestrator | Friday 29 August 2025 15:14:35 +0000 (0:00:00.281) 0:00:00.281 ********* 2025-08-29 15:19:34.401288 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:34.401369 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:19:34.401545 | 
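The STARTED/wait polling above follows a plain poll-until-terminal-state loop. A minimal sketch of that pattern, where `check_state` is a placeholder for the real OSISM task-status lookup:

```shell
# Sketch of the polling pattern in the log: query a task's state until it
# leaves STARTED, pausing between checks. check_state is a hypothetical
# stand-in for the actual task-status query.
wait_for_task() {
  task_id="$1"
  while true; do
    state="$(check_state "$task_id")"
    echo "Task $task_id is in state $state"
    if [ "$state" != "STARTED" ]; then
      break
    fi
    echo "Wait 1 second(s) until the next check"
    sleep 1
  done
}
```

In the log the loop only exits on SUCCESS here; a production version would also treat FAILURE/REVOKED as terminal and enforce an overall timeout.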
orchestrator | ok: [testbed-node-2] 2025-08-29 15:19:34.401574 | orchestrator | 2025-08-29 15:19:34.401594 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-08-29 15:19:34.401633 | orchestrator | Friday 29 August 2025 15:14:36 +0000 (0:00:00.338) 0:00:00.620 ********* 2025-08-29 15:19:34.401651 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-08-29 15:19:34.401670 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-08-29 15:19:34.401689 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-08-29 15:19:34.401742 | orchestrator | 2025-08-29 15:19:34.401765 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-08-29 15:19:34.402727 | orchestrator | 2025-08-29 15:19:34.402761 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:19:34.402779 | orchestrator | Friday 29 August 2025 15:14:36 +0000 (0:00:00.450) 0:00:01.071 ********* 2025-08-29 15:19:34.402798 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:34.402818 | orchestrator | 2025-08-29 15:19:34.402837 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-08-29 15:19:34.402855 | orchestrator | Friday 29 August 2025 15:14:37 +0000 (0:00:00.590) 0:00:01.661 ********* 2025-08-29 15:19:34.402875 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-08-29 15:19:34.402888 | orchestrator | 2025-08-29 15:19:34.402899 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-08-29 15:19:34.402911 | orchestrator | Friday 29 August 2025 15:14:40 +0000 (0:00:03.588) 0:00:05.250 ********* 2025-08-29 15:19:34.402921 | orchestrator | changed: [testbed-node-0] => (item=octavia -> 
https://api-int.testbed.osism.xyz:9876 -> internal) 2025-08-29 15:19:34.402933 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-08-29 15:19:34.402944 | orchestrator | 2025-08-29 15:19:34.402955 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-08-29 15:19:34.402965 | orchestrator | Friday 29 August 2025 15:14:47 +0000 (0:00:06.809) 0:00:12.059 ********* 2025-08-29 15:19:34.402976 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-08-29 15:19:34.402987 | orchestrator | 2025-08-29 15:19:34.402998 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-08-29 15:19:34.403008 | orchestrator | Friday 29 August 2025 15:14:50 +0000 (0:00:03.431) 0:00:15.490 ********* 2025-08-29 15:19:34.403019 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-08-29 15:19:34.403030 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 15:19:34.403041 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-08-29 15:19:34.403052 | orchestrator | 2025-08-29 15:19:34.403063 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-08-29 15:19:34.403074 | orchestrator | Friday 29 August 2025 15:14:59 +0000 (0:00:08.477) 0:00:23.968 ********* 2025-08-29 15:19:34.403085 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-08-29 15:19:34.403095 | orchestrator | 2025-08-29 15:19:34.403106 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-08-29 15:19:34.403117 | orchestrator | Friday 29 August 2025 15:15:02 +0000 (0:00:03.400) 0:00:27.368 ********* 2025-08-29 15:19:34.403128 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-08-29 15:19:34.403138 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> 
admin) 2025-08-29 15:19:34.403149 | orchestrator | 2025-08-29 15:19:34.403159 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-08-29 15:19:34.403170 | orchestrator | Friday 29 August 2025 15:15:10 +0000 (0:00:07.844) 0:00:35.212 ********* 2025-08-29 15:19:34.403180 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-08-29 15:19:34.403191 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-08-29 15:19:34.403201 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-08-29 15:19:34.403212 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-08-29 15:19:34.403223 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-08-29 15:19:34.403233 | orchestrator | 2025-08-29 15:19:34.403244 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:19:34.403255 | orchestrator | Friday 29 August 2025 15:15:27 +0000 (0:00:16.429) 0:00:51.642 ********* 2025-08-29 15:19:34.403282 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:34.403294 | orchestrator | 2025-08-29 15:19:34.403305 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-08-29 15:19:34.403349 | orchestrator | Friday 29 August 2025 15:15:28 +0000 (0:00:01.421) 0:00:53.064 ********* 2025-08-29 15:19:34.403361 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.403373 | orchestrator | 2025-08-29 15:19:34.403385 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-08-29 15:19:34.403402 | orchestrator | Friday 29 August 2025 15:15:33 +0000 (0:00:04.991) 0:00:58.056 ********* 2025-08-29 15:19:34.403423 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.403441 | orchestrator 
| 2025-08-29 15:19:34.403460 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-08-29 15:19:34.403549 | orchestrator | Friday 29 August 2025 15:15:38 +0000 (0:00:04.780) 0:01:02.837 ********* 2025-08-29 15:19:34.403573 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:34.403593 | orchestrator | 2025-08-29 15:19:34.403608 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-08-29 15:19:34.403619 | orchestrator | Friday 29 August 2025 15:15:41 +0000 (0:00:03.394) 0:01:06.232 ********* 2025-08-29 15:19:34.403630 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-08-29 15:19:34.403641 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-08-29 15:19:34.403652 | orchestrator | 2025-08-29 15:19:34.403662 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-08-29 15:19:34.403685 | orchestrator | Friday 29 August 2025 15:15:52 +0000 (0:00:10.959) 0:01:17.191 ********* 2025-08-29 15:19:34.403697 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-08-29 15:19:34.403708 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-08-29 15:19:34.403721 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-08-29 15:19:34.403733 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-08-29 15:19:34.403744 | orchestrator | 2025-08-29 15:19:34.403755 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-08-29 15:19:34.403766 | 
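The "Add rules for security groups" items above map onto ordinary Neutron security-group rules. A hedged sketch of roughly equivalent OpenStack CLI calls (the role itself drives these through Ansible openstack.cloud modules, not the CLI):

```shell
# Approximate CLI equivalents of the four rule items logged above.
# lb-mgmt-sec-grp: amphora management traffic; lb-health-mgr-sec-grp:
# health-manager heartbeat traffic.
openstack security group rule create --protocol icmp lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
```

Port 9443 is the amphora agent API and UDP 5555 carries the health-manager heartbeats, which is why the latter rule lands in lb-health-mgr-sec-grp.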
orchestrator | Friday 29 August 2025 15:16:08 +0000 (0:00:16.322) 0:01:33.513 ********* 2025-08-29 15:19:34.403776 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.403787 | orchestrator | 2025-08-29 15:19:34.403798 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-08-29 15:19:34.403809 | orchestrator | Friday 29 August 2025 15:16:13 +0000 (0:00:04.644) 0:01:38.158 ********* 2025-08-29 15:19:34.403819 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.403830 | orchestrator | 2025-08-29 15:19:34.403841 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-08-29 15:19:34.403851 | orchestrator | Friday 29 August 2025 15:16:19 +0000 (0:00:05.570) 0:01:43.728 ********* 2025-08-29 15:19:34.403862 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:34.403873 | orchestrator | 2025-08-29 15:19:34.403883 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-08-29 15:19:34.403894 | orchestrator | Friday 29 August 2025 15:16:19 +0000 (0:00:00.249) 0:01:43.978 ********* 2025-08-29 15:19:34.403905 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.403915 | orchestrator | 2025-08-29 15:19:34.403926 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:19:34.403936 | orchestrator | Friday 29 August 2025 15:16:25 +0000 (0:00:05.665) 0:01:49.643 ********* 2025-08-29 15:19:34.403947 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:34.403969 | orchestrator | 2025-08-29 15:19:34.403980 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-08-29 15:19:34.403990 | orchestrator | Friday 29 August 2025 15:16:26 +0000 (0:00:01.014) 0:01:50.657 ********* 2025-08-29 15:19:34.404001 | orchestrator | 
changed: [testbed-node-2] 2025-08-29 15:19:34.404012 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.404023 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:34.404038 | orchestrator | 2025-08-29 15:19:34.404057 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-08-29 15:19:34.404076 | orchestrator | Friday 29 August 2025 15:16:31 +0000 (0:00:05.028) 0:01:55.686 ********* 2025-08-29 15:19:34.404093 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:34.404109 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:34.404125 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.404141 | orchestrator | 2025-08-29 15:19:34.404156 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-08-29 15:19:34.404171 | orchestrator | Friday 29 August 2025 15:16:35 +0000 (0:00:04.368) 0:02:00.055 ********* 2025-08-29 15:19:34.404187 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.404203 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:34.404221 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:34.404238 | orchestrator | 2025-08-29 15:19:34.404255 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-08-29 15:19:34.404272 | orchestrator | Friday 29 August 2025 15:16:36 +0000 (0:00:00.777) 0:02:00.832 ********* 2025-08-29 15:19:34.404290 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:19:34.404308 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:34.404400 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:19:34.404417 | orchestrator | 2025-08-29 15:19:34.404435 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-08-29 15:19:34.404451 | orchestrator | Friday 29 August 2025 15:16:38 +0000 (0:00:02.080) 0:02:02.913 ********* 2025-08-29 15:19:34.404468 | orchestrator | changed: [testbed-node-0] 
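The hm-interface tasks above (create a Neutron port per health-manager node, pin its host_id, plug it into br-int, then DHCP it) reduce to Open vSwitch and dhclient plumbing on each node. A hedged sketch, with PORT_ID and PORT_MAC as placeholders for the values looked up from Neutron:

```shell
# Sketch of the octavia-interface wiring on one node: attach the
# pre-created Neutron port as internal interface ohm0 on br-int so OVS
# treats it as that port, then acquire an address from the lb-mgmt subnet.
ovs-vsctl -- --may-exist add-port br-int ohm0 \
  -- set Interface ohm0 type=internal \
  -- set Interface ohm0 external-ids:iface-id="$PORT_ID" \
  -- set Interface ohm0 external-ids:iface-status=active \
  -- set Interface ohm0 external-ids:attached-mac="$PORT_MAC"
dhclient -v ohm0 -cf /etc/dhcp/octavia-dhclient.conf
```

The later "Wait for interface ohm0 ip appear" task then simply polls until dhclient has put an address on ohm0.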
2025-08-29 15:19:34.404485 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:34.404503 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:34.404520 | orchestrator | 2025-08-29 15:19:34.404536 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-08-29 15:19:34.404551 | orchestrator | Friday 29 August 2025 15:16:39 +0000 (0:00:01.246) 0:02:04.159 ********* 2025-08-29 15:19:34.404568 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.404585 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:34.404602 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:34.404618 | orchestrator | 2025-08-29 15:19:34.404634 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-08-29 15:19:34.404652 | orchestrator | Friday 29 August 2025 15:16:40 +0000 (0:00:01.176) 0:02:05.335 ********* 2025-08-29 15:19:34.404669 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.404687 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:34.404704 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:34.404721 | orchestrator | 2025-08-29 15:19:34.404811 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-08-29 15:19:34.404836 | orchestrator | Friday 29 August 2025 15:16:42 +0000 (0:00:02.073) 0:02:07.408 ********* 2025-08-29 15:19:34.404853 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.404870 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:34.404885 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:34.404902 | orchestrator | 2025-08-29 15:19:34.404917 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-08-29 15:19:34.404931 | orchestrator | Friday 29 August 2025 15:16:44 +0000 (0:00:01.571) 0:02:08.980 ********* 2025-08-29 15:19:34.404946 | orchestrator | ok: [testbed-node-0] 2025-08-29 
15:19:34.404973 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:19:34.404990 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:19:34.405006 | orchestrator | 2025-08-29 15:19:34.405021 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-08-29 15:19:34.405093 | orchestrator | Friday 29 August 2025 15:16:45 +0000 (0:00:00.916) 0:02:09.896 ********* 2025-08-29 15:19:34.405110 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:19:34.405125 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:19:34.405141 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:34.405158 | orchestrator | 2025-08-29 15:19:34.405175 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:19:34.405190 | orchestrator | Friday 29 August 2025 15:16:48 +0000 (0:00:02.753) 0:02:12.650 ********* 2025-08-29 15:19:34.405206 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:34.405221 | orchestrator | 2025-08-29 15:19:34.405237 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-08-29 15:19:34.405252 | orchestrator | Friday 29 August 2025 15:16:48 +0000 (0:00:00.538) 0:02:13.188 ********* 2025-08-29 15:19:34.405266 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:34.405281 | orchestrator | 2025-08-29 15:19:34.405297 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-08-29 15:19:34.405339 | orchestrator | Friday 29 August 2025 15:16:52 +0000 (0:00:04.252) 0:02:17.441 ********* 2025-08-29 15:19:34.405356 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:34.405370 | orchestrator | 2025-08-29 15:19:34.405386 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-08-29 15:19:34.405402 | orchestrator | Friday 29 August 2025 15:16:55 +0000 
(0:00:02.976) 0:02:20.418 ********* 2025-08-29 15:19:34.405418 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-08-29 15:19:34.405435 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-08-29 15:19:34.405451 | orchestrator | 2025-08-29 15:19:34.405466 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-08-29 15:19:34.405481 | orchestrator | Friday 29 August 2025 15:17:02 +0000 (0:00:07.070) 0:02:27.488 ********* 2025-08-29 15:19:34.405498 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:34.405514 | orchestrator | 2025-08-29 15:19:34.405531 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-08-29 15:19:34.405548 | orchestrator | Friday 29 August 2025 15:17:06 +0000 (0:00:03.217) 0:02:30.706 ********* 2025-08-29 15:19:34.405563 | orchestrator | ok: [testbed-node-0] 2025-08-29 15:19:34.405580 | orchestrator | ok: [testbed-node-1] 2025-08-29 15:19:34.405596 | orchestrator | ok: [testbed-node-2] 2025-08-29 15:19:34.405613 | orchestrator | 2025-08-29 15:19:34.405630 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-08-29 15:19:34.405647 | orchestrator | Friday 29 August 2025 15:17:06 +0000 (0:00:00.354) 0:02:31.060 ********* 2025-08-29 15:19:34.405668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.405752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.405821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.405844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.405863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.405879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.405896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.405915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.405994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.406060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 
'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.406081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.406099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.406116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.406133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.406160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.406177 | orchestrator | 2025-08-29 15:19:34.406194 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-08-29 15:19:34.406211 | orchestrator | Friday 29 August 2025 15:17:08 +0000 (0:00:02.413) 0:02:33.474 ********* 2025-08-29 15:19:34.406228 | orchestrator | skipping: [testbed-node-0] 2025-08-29 
15:19:34.406245 | orchestrator | 2025-08-29 15:19:34.406383 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-08-29 15:19:34.406408 | orchestrator | Friday 29 August 2025 15:17:09 +0000 (0:00:00.145) 0:02:33.620 ********* 2025-08-29 15:19:34.406423 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:34.406438 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:34.406453 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:34.406469 | orchestrator | 2025-08-29 15:19:34.406484 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-08-29 15:19:34.406499 | orchestrator | Friday 29 August 2025 15:17:09 +0000 (0:00:00.506) 0:02:34.127 ********* 2025-08-29 15:19:34.406526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:19:34.406543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.406560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.406578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.406608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:19:34.406624 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:34.406706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:19:34.406728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.406744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.406759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.406774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:19:34.406800 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:34.406817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:19:34.406880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.406905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 
15:19:34.406919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.406932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:19:34.406944 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:34.406956 | orchestrator | 2025-08-29 15:19:34.406968 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:19:34.406980 | orchestrator | Friday 29 August 2025 15:17:10 +0000 (0:00:00.712) 0:02:34.839 ********* 2025-08-29 15:19:34.407001 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-08-29 15:19:34.407013 | orchestrator | 2025-08-29 15:19:34.407025 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-08-29 15:19:34.407038 | orchestrator | Friday 29 August 2025 15:17:10 
+0000 (0:00:00.537) 0:02:35.376 ********* 2025-08-29 15:19:34.407050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.407103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 
15:19:34.407124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.407136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.407149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-08-29 15:19:34.407201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.407215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.407398 | orchestrator | 2025-08-29 15:19:34.407411 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-08-29 15:19:34.407425 | orchestrator | Friday 29 August 2025 15:17:16 +0000 (0:00:05.205) 0:02:40.582 ********* 2025-08-29 15:19:34.407444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:19:34.407457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.407477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:19:34.407514 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:34.407535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:19:34.407555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.407568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:19:34.407615 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:34.407629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:19:34.407649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.407668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:19:34.407714 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:34.407726 | orchestrator | 2025-08-29 15:19:34.407738 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-08-29 15:19:34.407751 | orchestrator | Friday 29 August 2025 15:17:17 +0000 (0:00:00.953) 0:02:41.536 ********* 2025-08-29 15:19:34.407764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:19:34.407777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.407790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:19:34.407861 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:34.407875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-08-29 15:19:34.407889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.407902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.407938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 15:19:34.407952 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:34.407972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-08-29 15:19:34.407995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-08-29 15:19:34.408008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.408023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-08-29 15:19:34.408037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-08-29 
15:19:34.408050 | orchestrator | skipping: [testbed-node-2] 2025-08-29 15:19:34.408063 | orchestrator | 2025-08-29 15:19:34.408076 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-08-29 15:19:34.408089 | orchestrator | Friday 29 August 2025 15:17:17 +0000 (0:00:00.871) 0:02:42.408 ********* 2025-08-29 15:19:34.408121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.408146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.408161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.408174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.408189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.408204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.408232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 
15:19:34.408379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408387 | orchestrator | 2025-08-29 15:19:34.408396 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-08-29 15:19:34.408406 | orchestrator | Friday 29 August 2025 15:17:22 +0000 (0:00:05.124) 0:02:47.532 ********* 2025-08-29 15:19:34.408420 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:19:34.408435 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:19:34.408448 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-08-29 15:19:34.408462 | orchestrator | 2025-08-29 15:19:34.408476 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-08-29 15:19:34.408491 | orchestrator | Friday 29 August 2025 15:17:25 +0000 (0:00:02.091) 0:02:49.623 ********* 2025-08-29 15:19:34.408506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.408521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.408560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.408577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.408592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.408607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.408622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.408765 | orchestrator | 2025-08-29 15:19:34.408773 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
********************************** 2025-08-29 15:19:34.408781 | orchestrator | Friday 29 August 2025 15:17:41 +0000 (0:00:16.315) 0:03:05.938 ********* 2025-08-29 15:19:34.408789 | orchestrator | changed: [testbed-node-0] 2025-08-29 15:19:34.408797 | orchestrator | changed: [testbed-node-1] 2025-08-29 15:19:34.408805 | orchestrator | changed: [testbed-node-2] 2025-08-29 15:19:34.408813 | orchestrator | 2025-08-29 15:19:34.408820 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-08-29 15:19:34.408828 | orchestrator | Friday 29 August 2025 15:17:42 +0000 (0:00:01.471) 0:03:07.410 ********* 2025-08-29 15:19:34.408836 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.408844 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.408857 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.408865 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.408873 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.408881 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.408889 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.408896 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.408904 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.408916 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:19:34.408924 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:19:34.408932 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:19:34.408940 | orchestrator | 2025-08-29 15:19:34.408948 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-housekeeping] ************ 2025-08-29 15:19:34.408955 | orchestrator | Friday 29 August 2025 15:17:48 +0000 (0:00:05.331) 0:03:12.742 ********* 2025-08-29 15:19:34.408963 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.408971 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.408979 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.408987 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.408994 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.409002 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.409010 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.409018 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.409026 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.409034 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:19:34.409042 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:19:34.409050 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:19:34.409057 | orchestrator | 2025-08-29 15:19:34.409065 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-08-29 15:19:34.409073 | orchestrator | Friday 29 August 2025 15:17:53 +0000 (0:00:05.270) 0:03:18.013 ********* 2025-08-29 15:19:34.409081 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.409088 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.409096 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-08-29 15:19:34.409104 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.409121 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.409129 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-08-29 15:19:34.409137 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.409147 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.409159 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-08-29 15:19:34.409172 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-08-29 15:19:34.409186 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-08-29 15:19:34.409194 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-08-29 15:19:34.409202 | orchestrator | 2025-08-29 15:19:34.409209 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-08-29 15:19:34.409217 | orchestrator | Friday 29 August 2025 15:17:58 +0000 (0:00:05.297) 0:03:23.310 ********* 2025-08-29 15:19:34.409226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.409279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.409290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-08-29 15:19:34.409298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.409370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.409382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-08-29 15:19:34.409390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-08-29 15:19:34.409488 | orchestrator | 2025-08-29 15:19:34.409496 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-08-29 15:19:34.409504 | orchestrator | Friday 29 August 2025 15:18:02 +0000 (0:00:03.710) 0:03:27.020 ********* 2025-08-29 15:19:34.409512 | orchestrator | skipping: [testbed-node-0] 2025-08-29 15:19:34.409520 | orchestrator | skipping: [testbed-node-1] 2025-08-29 15:19:34.409547 | orchestrator | skipping: [testbed-node-2] 
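The per-container dictionaries dumped by the "Check octavia containers" task above all share one shape: a `value` mapping with `container_name`, `image`, `volumes`, and, for most services, a Docker-style `healthcheck` whose `test` list starts with `CMD-SHELL`. A minimal sketch of reading that structure back out (the helper `healthcheck_command` is hypothetical, not part of kolla-ansible; the sample item is abridged from the log):

```python
# Abridged copy of one loop item from the log above (octavia-api on testbed-node-0).
item = {
    "key": "octavia-api",
    "value": {
        "container_name": "octavia_api",
        "image": "registry.osism.tech/kolla/octavia-api:2024.2",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"],
            "timeout": "30",
        },
    },
}

def healthcheck_command(item):
    # Return the shell command from the Docker-style healthcheck "test" list,
    # or None for containers (e.g. octavia-driver-agent above) that define none.
    hc = item["value"].get("healthcheck")
    if not hc:
        return None
    kind, *rest = hc["test"]
    return rest[0] if kind == "CMD-SHELL" else " ".join(rest)

print(healthcheck_command(item))  # healthcheck_curl http://192.168.16.10:9876
```

The `CMD-SHELL` form matches Docker's `HEALTHCHECK` instruction, where the remainder of the list is run through the container's shell.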
2025-08-29 15:19:34.409556 | orchestrator |
2025-08-29 15:19:34.409564 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2025-08-29 15:19:34.409572 | orchestrator | Friday 29 August 2025 15:18:02 +0000 (0:00:00.319) 0:03:27.340 *********
2025-08-29 15:19:34.409580 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409588 | orchestrator |
2025-08-29 15:19:34.409596 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-08-29 15:19:34.409603 | orchestrator | Friday 29 August 2025 15:18:05 +0000 (0:00:02.252) 0:03:29.593 *********
2025-08-29 15:19:34.409611 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409618 | orchestrator |
2025-08-29 15:19:34.409629 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-08-29 15:19:34.409636 | orchestrator | Friday 29 August 2025 15:18:07 +0000 (0:00:02.040) 0:03:31.633 *********
2025-08-29 15:19:34.409643 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409650 | orchestrator |
2025-08-29 15:19:34.409656 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-08-29 15:19:34.409664 | orchestrator | Friday 29 August 2025 15:18:09 +0000 (0:00:02.099) 0:03:33.732 *********
2025-08-29 15:19:34.409670 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409677 | orchestrator |
2025-08-29 15:19:34.409683 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-08-29 15:19:34.409690 | orchestrator | Friday 29 August 2025 15:18:11 +0000 (0:00:02.173) 0:03:35.906 *********
2025-08-29 15:19:34.409697 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409704 | orchestrator |
2025-08-29 15:19:34.409710 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 15:19:34.409717 | orchestrator | Friday 29 August 2025 15:18:33 +0000 (0:00:21.672) 0:03:57.578 *********
2025-08-29 15:19:34.409724 | orchestrator |
2025-08-29 15:19:34.409731 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 15:19:34.409737 | orchestrator | Friday 29 August 2025 15:18:33 +0000 (0:00:00.073) 0:03:57.652 *********
2025-08-29 15:19:34.409744 | orchestrator |
2025-08-29 15:19:34.409750 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-08-29 15:19:34.409757 | orchestrator | Friday 29 August 2025 15:18:33 +0000 (0:00:00.066) 0:03:57.719 *********
2025-08-29 15:19:34.409764 | orchestrator |
2025-08-29 15:19:34.409771 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-08-29 15:19:34.409777 | orchestrator | Friday 29 August 2025 15:18:33 +0000 (0:00:00.063) 0:03:57.782 *********
2025-08-29 15:19:34.409784 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409791 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:34.409798 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:34.409804 | orchestrator |
2025-08-29 15:19:34.409811 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-08-29 15:19:34.409817 | orchestrator | Friday 29 August 2025 15:18:49 +0000 (0:00:16.090) 0:04:13.873 *********
2025-08-29 15:19:34.409824 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409831 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:34.409838 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:34.409844 | orchestrator |
2025-08-29 15:19:34.409851 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-08-29 15:19:34.409858 | orchestrator | Friday 29 August 2025 15:19:00 +0000 (0:00:11.566) 0:04:25.440 *********
2025-08-29 15:19:34.409864 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409871 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:34.409878 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:34.409885 | orchestrator |
2025-08-29 15:19:34.409892 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-08-29 15:19:34.409898 | orchestrator | Friday 29 August 2025 15:19:11 +0000 (0:00:10.426) 0:04:35.866 *********
2025-08-29 15:19:34.409905 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409912 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:34.409918 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:34.409925 | orchestrator |
2025-08-29 15:19:34.409931 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-08-29 15:19:34.409938 | orchestrator | Friday 29 August 2025 15:19:21 +0000 (0:00:10.048) 0:04:45.915 *********
2025-08-29 15:19:34.409945 | orchestrator | changed: [testbed-node-1]
2025-08-29 15:19:34.409951 | orchestrator | changed: [testbed-node-0]
2025-08-29 15:19:34.409958 | orchestrator | changed: [testbed-node-2]
2025-08-29 15:19:34.409965 | orchestrator |
2025-08-29 15:19:34.409971 | orchestrator | PLAY RECAP *********************************************************************
2025-08-29 15:19:34.409979 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-08-29 15:19:34.409991 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:19:34.409998 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-08-29 15:19:34.410005 | orchestrator |
2025-08-29 15:19:34.410012 | orchestrator |
2025-08-29 15:19:34.410045 | orchestrator | TASKS RECAP ********************************************************************
2025-08-29 15:19:34.410053 | orchestrator | Friday 29 August 2025 15:19:32 +0000 (0:00:10.929) 0:04:56.844 *********
2025-08-29 15:19:34.410064 | orchestrator | ===============================================================================
2025-08-29 15:19:34.410071 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.67s
2025-08-29 15:19:34.410078 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.43s
2025-08-29 15:19:34.410085 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.32s
2025-08-29 15:19:34.410091 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.32s
2025-08-29 15:19:34.410098 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.09s
2025-08-29 15:19:34.410109 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.57s
2025-08-29 15:19:34.410116 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.96s
2025-08-29 15:19:34.410123 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.93s
2025-08-29 15:19:34.410129 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.43s
2025-08-29 15:19:34.410136 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.05s
2025-08-29 15:19:34.410143 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.48s
2025-08-29 15:19:34.410150 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.84s
2025-08-29 15:19:34.410156 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.07s
2025-08-29 15:19:34.410163 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.81s
2025-08-29 15:19:34.410170 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.67s
2025-08-29 15:19:34.410177 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.57s
2025-08-29 15:19:34.410183 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.33s
2025-08-29 15:19:34.410190 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.30s
2025-08-29 15:19:34.410197 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.27s
2025-08-29 15:19:34.410204 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.21s
2025-08-29 15:19:34.410210 | orchestrator | 2025-08-29 15:19:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:37.445257 | orchestrator | 2025-08-29 15:19:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:40.489429 | orchestrator | 2025-08-29 15:19:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:43.534496 | orchestrator | 2025-08-29 15:19:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:46.579430 | orchestrator | 2025-08-29 15:19:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:49.619933 | orchestrator | 2025-08-29 15:19:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:52.697507 | orchestrator | 2025-08-29 15:19:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:55.738965 | orchestrator | 2025-08-29 15:19:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:19:58.814676 | orchestrator | 2025-08-29 15:19:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:01.835046 | orchestrator | 2025-08-29 15:20:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-08-29 15:20:04.879528 | orchestrator | 2025-08-29 15:20:04 | INFO  | Wait 1 second(s) until refresh of running tasks
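The PLAY RECAP counters above follow Ansible's fixed `host : key=value ...` layout, which makes them easy to check mechanically, e.g. to fail a pipeline on any non-zero `failed` or `unreachable` count. A small sketch assuming only that layout (`parse_recap` is a hypothetical helper, not part of any OSISM tooling):

```python
import re

def parse_recap(line):
    # "testbed-node-0 : ok=57  changed=39  ..." -> (host, {counter: int})
    host, _, counters = line.partition(" : ")
    return host.strip(), {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}

host, stats = parse_recap(
    "testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
)
print(host, stats["failed"], stats["unreachable"])  # testbed-node-0 0 0
```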
2025-08-29 15:20:07.927949 | orchestrator | 2025-08-29 15:20:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:10.971591 | orchestrator | 2025-08-29 15:20:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:14.017668 | orchestrator | 2025-08-29 15:20:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:17.052724 | orchestrator | 2025-08-29 15:20:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:20.094522 | orchestrator | 2025-08-29 15:20:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:23.137053 | orchestrator | 2025-08-29 15:20:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:26.183823 | orchestrator | 2025-08-29 15:20:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:29.226005 | orchestrator | 2025-08-29 15:20:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:32.270836 | orchestrator | 2025-08-29 15:20:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-08-29 15:20:35.311400 | orchestrator | 2025-08-29 15:20:35.651409 | orchestrator | 2025-08-29 15:20:35.654969 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Aug 29 15:20:35 UTC 2025 2025-08-29 15:20:35.655042 | orchestrator | 2025-08-29 15:20:36.028257 | orchestrator | ok: Runtime: 0:34:51.888270 2025-08-29 15:20:36.302353 | 2025-08-29 15:20:36.302572 | TASK [Bootstrap services] 2025-08-29 15:20:37.123798 | orchestrator | 2025-08-29 15:20:37.124035 | orchestrator | # BOOTSTRAP 2025-08-29 15:20:37.124070 | orchestrator | 2025-08-29 15:20:37.124093 | orchestrator | + set -e 2025-08-29 15:20:37.124113 | orchestrator | + echo 2025-08-29 15:20:37.124127 | orchestrator | + echo '# BOOTSTRAP' 2025-08-29 15:20:37.124143 | orchestrator | + echo 2025-08-29 15:20:37.124188 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-08-29 15:20:37.132837 | 
orchestrator | + set -e 2025-08-29 15:20:37.132940 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-08-29 15:20:41.757329 | orchestrator | 2025-08-29 15:20:41 | INFO  | It takes a moment until task 1fa26850-abad-426b-b35e-f5bde97c3f14 (flavor-manager) has been started and output is visible here. 2025-08-29 15:20:45.704228 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2025-08-29 15:20:45.704302 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:179 │ 2025-08-29 15:20:45.704311 | orchestrator | │ in run │ 2025-08-29 15:20:45.704316 | orchestrator | │ │ 2025-08-29 15:20:45.704320 | orchestrator | │ 176 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True) │ 2025-08-29 15:20:45.704331 | orchestrator | │ 177 │ │ 2025-08-29 15:20:45.704336 | orchestrator | │ 178 │ definitions = get_flavor_definitions(name, url) │ 2025-08-29 15:20:45.704340 | orchestrator | │ ❱ 179 │ manager = FlavorManager( │ 2025-08-29 15:20:45.704344 | orchestrator | │ 180 │ │ cloud=Cloud(cloud), definitions=definitions, recommended=recom │ 2025-08-29 15:20:45.704348 | orchestrator | │ 181 │ ) │ 2025-08-29 15:20:45.704352 | orchestrator | │ 182 │ manager.run() │ 2025-08-29 15:20:45.704356 | orchestrator | │ │ 2025-08-29 15:20:45.704361 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-08-29 15:20:45.704370 | orchestrator | │ │ cloud = 'admin' │ │ 2025-08-29 15:20:45.704374 | orchestrator | │ │ debug = False │ │ 2025-08-29 15:20:45.704378 | orchestrator | │ │ definitions = { │ │ 2025-08-29 15:20:45.704382 | orchestrator | │ │ │ 'reference': [ │ │ 2025-08-29 15:20:45.704386 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-08-29 15:20:45.704390 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-08-29 15:20:45.704394 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-08-29 
15:20:45.704398 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-08-29 15:20:45.704402 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-08-29 15:20:45.704406 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-08-29 15:20:45.704410 | orchestrator | │ │ │ ], │ │ 2025-08-29 15:20:45.704414 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-08-29 15:20:45.704417 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.704421 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-08-29 15:20:45.704439 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.704443 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 15:20:45.704447 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.704475 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 15:20:45.704480 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.704484 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:1', │ │ 2025-08-29 15:20:45.704487 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-08-29 15:20:45.704491 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.704495 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.704499 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.704503 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-08-29 15:20:45.704506 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.704510 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 15:20:45.704514 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 15:20:45.704518 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 15:20:45.704532 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.704537 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-08-29 15:20:45.704541 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-08-29 15:20:45.704544 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 
2025-08-29 15:20:45.704548 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.704552 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.704556 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-08-29 15:20:45.704562 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.704566 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 15:20:45.704570 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.704574 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.704578 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.704582 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-08-29 15:20:45.704585 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-08-29 15:20:45.704589 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.704593 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.704597 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.704601 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-08-29 15:20:45.704604 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.704612 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 15:20:45.704616 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 15:20:45.704620 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.704624 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.704628 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-08-29 15:20:45.704631 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-08-29 15:20:45.704635 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.704639 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.704643 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.704646 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-08-29 15:20:45.704650 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.704654 | orchestrator | │ │ │ │ │ 'ram': 
4096, │ │ 2025-08-29 15:20:45.704658 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.704662 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.704665 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.704669 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-08-29 15:20:45.704673 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-08-29 15:20:45.704676 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.704680 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.704684 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.704688 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-08-29 15:20:45.704692 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.704698 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:45.704702 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 15:20:45.704709 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.738873 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.738909 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-08-29 15:20:45.738913 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-08-29 15:20:45.738917 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.738921 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.738926 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.738930 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-08-29 15:20:45.738934 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.738949 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 15:20:45.738954 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.738957 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.738961 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.738965 | orchestrator | │ │ │ │ │ 'scs:name-v1': 
'SCS-1V:8', │ │ 2025-08-29 15:20:45.738968 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-08-29 15:20:45.738972 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.738976 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.738980 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.738983 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-08-29 15:20:45.738987 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.738991 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 15:20:45.738994 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-08-29 15:20:45.738998 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.739002 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.739006 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-08-29 15:20:45.739010 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-08-29 15:20:45.739013 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.739017 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.739021 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.739024 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-08-29 15:20:45.739028 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 15:20:45.739033 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:45.739036 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.739040 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.739044 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.739048 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-08-29 15:20:45.739051 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-08-29 15:20:45.739055 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.739067 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.739071 | orchestrator | │ │ │ │ { │ │ 2025-08-29 
15:20:45.739074 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-08-29 15:20:45.739078 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 15:20:45.739085 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:45.739089 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 15:20:45.739101 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.739105 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.739108 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-08-29 15:20:45.739112 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-08-29 15:20:45.739116 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.739120 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.739124 | orchestrator | │ │ │ │ ... +19 │ │ 2025-08-29 15:20:45.739127 | orchestrator | │ │ │ ] │ │ 2025-08-29 15:20:45.739131 | orchestrator | │ │ } │ │ 2025-08-29 15:20:45.739135 | orchestrator | │ │ level = 'INFO' │ │ 2025-08-29 15:20:45.739139 | orchestrator | │ │ log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | │ │ 2025-08-29 15:20:45.739143 | orchestrator | │ │ {level: <8} | '+17 │ │ 2025-08-29 15:20:45.739146 | orchestrator | │ │ name = 'local' │ │ 2025-08-29 15:20:45.739150 | orchestrator | │ │ recommended = True │ │ 2025-08-29 15:20:45.739154 | orchestrator | │ │ url = None │ │ 2025-08-29 15:20:45.739159 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-08-29 15:20:45.739164 | orchestrator | │ │ 2025-08-29 15:20:45.739168 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:97 │ 2025-08-29 15:20:45.739172 | orchestrator | │ in __init__ │ 2025-08-29 15:20:45.739176 | orchestrator | │ │ 2025-08-29 15:20:45.739179 | orchestrator | │ 94 │ │ self.required_flavors = definitions["mandatory"] │ 2025-08-29 15:20:45.739183 | orchestrator | │ 95 │ │ self.cloud = cloud │ 2025-08-29 15:20:45.739187 | 
orchestrator | │ 96 │ │ if recommended: │ 2025-08-29 15:20:45.739190 | orchestrator | │ ❱ 97 │ │ │ self.required_flavors = self.required_flavors + definition │ 2025-08-29 15:20:45.739194 | orchestrator | │ 98 │ │ │ 2025-08-29 15:20:45.739198 | orchestrator | │ 99 │ │ self.defaults_dict = {} │ 2025-08-29 15:20:45.739202 | orchestrator | │ 100 │ │ for item in definitions["reference"]: │ 2025-08-29 15:20:45.739205 | orchestrator | │ │ 2025-08-29 15:20:45.739212 | orchestrator | │ ╭───────────────────────────────── locals ─────────────────────────────────╮ │ 2025-08-29 15:20:45.739217 | orchestrator | │ │ cloud = │ │ 2025-08-29 15:20:45.739256 | orchestrator | │ │ definitions = { │ │ 2025-08-29 15:20:45.739260 | orchestrator | │ │ │ 'reference': [ │ │ 2025-08-29 15:20:45.739263 | orchestrator | │ │ │ │ {'field': 'name', 'mandatory_prefix': 'SCS-'}, │ │ 2025-08-29 15:20:45.739267 | orchestrator | │ │ │ │ {'field': 'cpus'}, │ │ 2025-08-29 15:20:45.739271 | orchestrator | │ │ │ │ {'field': 'ram'}, │ │ 2025-08-29 15:20:45.739275 | orchestrator | │ │ │ │ {'field': 'disk'}, │ │ 2025-08-29 15:20:45.739279 | orchestrator | │ │ │ │ {'field': 'public', 'default': True}, │ │ 2025-08-29 15:20:45.739283 | orchestrator | │ │ │ │ {'field': 'disabled', 'default': False} │ │ 2025-08-29 15:20:45.739286 | orchestrator | │ │ │ ], │ │ 2025-08-29 15:20:45.739290 | orchestrator | │ │ │ 'mandatory': [ │ │ 2025-08-29 15:20:45.739294 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.739301 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1', │ │ 2025-08-29 15:20:45.765493 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.765522 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 15:20:45.765526 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.765530 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 15:20:45.765534 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.765538 | orchestrator | │ │ │ │ │ 'scs:name-v1': 
'SCS-1L:1', │ │ 2025-08-29 15:20:45.765543 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-1', │ │ 2025-08-29 15:20:45.765547 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.765551 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.765554 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.765558 | orchestrator | │ │ │ │ │ 'name': 'SCS-1L-1-5', │ │ 2025-08-29 15:20:45.765562 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.765565 | orchestrator | │ │ │ │ │ 'ram': 1024, │ │ 2025-08-29 15:20:45.765569 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 15:20:45.765573 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'crowded-core', │ │ 2025-08-29 15:20:45.765576 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.765580 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1L:5', │ │ 2025-08-29 15:20:45.765584 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1L-5', │ │ 2025-08-29 15:20:45.765587 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.765591 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.765604 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.765607 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2', │ │ 2025-08-29 15:20:45.765611 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.765615 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 15:20:45.765619 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.765622 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.765626 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.765630 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2', │ │ 2025-08-29 15:20:45.765634 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2', │ │ 2025-08-29 15:20:45.765637 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.765647 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.765651 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.765655 | 
orchestrator | │ │ │ │ │ 'name': 'SCS-1V-2-5', │ │ 2025-08-29 15:20:45.765659 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.765662 | orchestrator | │ │ │ │ │ 'ram': 2048, │ │ 2025-08-29 15:20:45.765666 | orchestrator | │ │ │ │ │ 'disk': 5, │ │ 2025-08-29 15:20:45.765670 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.765673 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.765677 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:2:5', │ │ 2025-08-29 15:20:45.765681 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-2-5', │ │ 2025-08-29 15:20:45.765684 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.765688 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.765692 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.765703 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4', │ │ 2025-08-29 15:20:45.765707 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.765711 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:45.765715 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.765718 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.765722 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.765726 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4', │ │ 2025-08-29 15:20:45.765730 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4', │ │ 2025-08-29 15:20:45.765733 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.765737 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.765744 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.765747 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-4-10', │ │ 2025-08-29 15:20:45.765751 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.765755 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:45.765759 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 15:20:45.765762 | orchestrator | │ │ │ │ │ 
'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.765766 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.765770 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:4:10', │ │ 2025-08-29 15:20:45.765773 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-4-10', │ │ 2025-08-29 15:20:45.765777 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.765781 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.765785 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.765788 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8', │ │ 2025-08-29 15:20:45.765792 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.765796 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 15:20:45.765799 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.765803 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.765807 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.765811 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8', │ │ 2025-08-29 15:20:45.765815 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8', │ │ 2025-08-29 15:20:45.765819 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.765823 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.765826 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.765830 | orchestrator | │ │ │ │ │ 'name': 'SCS-1V-8-20', │ │ 2025-08-29 15:20:45.765834 | orchestrator | │ │ │ │ │ 'cpus': 1, │ │ 2025-08-29 15:20:45.765838 | orchestrator | │ │ │ │ │ 'ram': 8192, │ │ 2025-08-29 15:20:45.765841 | orchestrator | │ │ │ │ │ 'disk': 20, │ │ 2025-08-29 15:20:45.765845 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.765849 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.765853 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-1V:8:20', │ │ 2025-08-29 15:20:45.765856 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-1V-8-20', │ │ 2025-08-29 
15:20:45.765860 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.765864 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.765874 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.821103 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4', │ │ 2025-08-29 15:20:45.821162 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 15:20:45.821168 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:45.821172 | orchestrator | │ │ │ │ │ 'disk': 0, │ │ 2025-08-29 15:20:45.821176 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.821180 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.821184 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4', │ │ 2025-08-29 15:20:45.821188 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4', │ │ 2025-08-29 15:20:45.821192 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.821195 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.821199 | orchestrator | │ │ │ │ { │ │ 2025-08-29 15:20:45.821203 | orchestrator | │ │ │ │ │ 'name': 'SCS-2V-4-10', │ │ 2025-08-29 15:20:45.821208 | orchestrator | │ │ │ │ │ 'cpus': 2, │ │ 2025-08-29 15:20:45.821211 | orchestrator | │ │ │ │ │ 'ram': 4096, │ │ 2025-08-29 15:20:45.821215 | orchestrator | │ │ │ │ │ 'disk': 10, │ │ 2025-08-29 15:20:45.821219 | orchestrator | │ │ │ │ │ 'scs:cpu-type': 'shared-core', │ │ 2025-08-29 15:20:45.821222 | orchestrator | │ │ │ │ │ 'scs:disk0-type': 'network', │ │ 2025-08-29 15:20:45.821226 | orchestrator | │ │ │ │ │ 'scs:name-v1': 'SCS-2V:4:10', │ │ 2025-08-29 15:20:45.821230 | orchestrator | │ │ │ │ │ 'scs:name-v2': 'SCS-2V-4-10', │ │ 2025-08-29 15:20:45.821234 | orchestrator | │ │ │ │ │ 'hw_rng:allowed': 'true' │ │ 2025-08-29 15:20:45.821237 | orchestrator | │ │ │ │ }, │ │ 2025-08-29 15:20:45.821241 | orchestrator | │ │ │ │ ... 
+19 │ │ 2025-08-29 15:20:45.821245 | orchestrator | │ │ │ ] │ │ 2025-08-29 15:20:45.821249 | orchestrator | │ │ } │ │ 2025-08-29 15:20:45.821253 | orchestrator | │ │ recommended = True │ │ 2025-08-29 15:20:45.821257 | orchestrator | │ │ self = │ │ 2025-08-29 15:20:45.821265 | orchestrator | │ ╰──────────────────────────────────────────────────────────────────────────╯ │ 2025-08-29 15:20:45.821271 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2025-08-29 15:20:45.821275 | orchestrator | KeyError: 'recommended' 2025-08-29 15:20:46.351851 | orchestrator | ERROR 2025-08-29 15:20:46.352292 | orchestrator | { 2025-08-29 15:20:46.352387 | orchestrator | "delta": "0:00:09.433849", 2025-08-29 15:20:46.352491 | orchestrator | "end": "2025-08-29 15:20:46.146962", 2025-08-29 15:20:46.352547 | orchestrator | "msg": "non-zero return code", 2025-08-29 15:20:46.352596 | orchestrator | "rc": 1, 2025-08-29 15:20:46.352644 | orchestrator | "start": "2025-08-29 15:20:36.713113" 2025-08-29 15:20:46.352693 | orchestrator | } failure 2025-08-29 15:20:46.372885 | 2025-08-29 15:20:46.373031 | PLAY RECAP 2025-08-29 15:20:46.373097 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-08-29 15:20:46.373129 | 2025-08-29 15:20:46.632878 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-08-29 15:20:46.635272 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 15:20:48.269305 | 2025-08-29 15:20:48.269557 | PLAY [Post output play] 2025-08-29 15:20:48.299158 | 2025-08-29 15:20:48.299313 | LOOP [stage-output : Register sources] 2025-08-29 15:20:48.374950 | 2025-08-29 15:20:48.375207 | TASK [stage-output : Check sudo] 2025-08-29 15:20:49.377279 | orchestrator | sudo: a password is required 2025-08-29 15:20:49.436465 | orchestrator | ok: Runtime: 0:00:00.141723 2025-08-29 15:20:49.446785 | 2025-08-29 15:20:49.446948 | LOOP 
[stage-output : Set source and destination for files and folders] 2025-08-29 15:20:49.499713 | 2025-08-29 15:20:49.500075 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-08-29 15:20:49.579827 | orchestrator | ok 2025-08-29 15:20:49.589298 | 2025-08-29 15:20:49.589523 | LOOP [stage-output : Ensure target folders exist] 2025-08-29 15:20:50.075547 | orchestrator | ok: "docs" 2025-08-29 15:20:50.075926 | 2025-08-29 15:20:50.340381 | orchestrator | ok: "artifacts" 2025-08-29 15:20:50.611774 | orchestrator | ok: "logs" 2025-08-29 15:20:50.626559 | 2025-08-29 15:20:50.626709 | LOOP [stage-output : Copy files and folders to staging folder] 2025-08-29 15:20:50.676883 | 2025-08-29 15:20:50.677238 | TASK [stage-output : Make all log files readable] 2025-08-29 15:20:50.985634 | orchestrator | ok 2025-08-29 15:20:50.994534 | 2025-08-29 15:20:50.994670 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-08-29 15:20:51.039649 | orchestrator | skipping: Conditional result was False 2025-08-29 15:20:51.055034 | 2025-08-29 15:20:51.055214 | TASK [stage-output : Discover log files for compression] 2025-08-29 15:20:51.079942 | orchestrator | skipping: Conditional result was False 2025-08-29 15:20:51.088565 | 2025-08-29 15:20:51.088695 | LOOP [stage-output : Archive everything from logs] 2025-08-29 15:20:51.125468 | 2025-08-29 15:20:51.125667 | PLAY [Post cleanup play] 2025-08-29 15:20:51.133700 | 2025-08-29 15:20:51.133850 | TASK [Set cloud fact (Zuul deployment)] 2025-08-29 15:20:51.199183 | orchestrator | ok 2025-08-29 15:20:51.211559 | 2025-08-29 15:20:51.211726 | TASK [Set cloud fact (local deployment)] 2025-08-29 15:20:51.236527 | orchestrator | skipping: Conditional result was False 2025-08-29 15:20:51.251740 | 2025-08-29 15:20:51.251917 | TASK [Clean the cloud environment] 2025-08-29 15:20:53.114005 | orchestrator | 2025-08-29 15:20:53 - clean up servers 2025-08-29 15:20:53.919100 | orchestrator | 2025-08-29 15:20:53 - 
testbed-manager 2025-08-29 15:20:54.006713 | orchestrator | 2025-08-29 15:20:54 - testbed-node-2 2025-08-29 15:20:54.096007 | orchestrator | 2025-08-29 15:20:54 - testbed-node-4 2025-08-29 15:20:54.187023 | orchestrator | 2025-08-29 15:20:54 - testbed-node-5 2025-08-29 15:20:54.401981 | orchestrator | 2025-08-29 15:20:54 - testbed-node-3 2025-08-29 15:20:54.490346 | orchestrator | 2025-08-29 15:20:54 - testbed-node-0 2025-08-29 15:20:54.577538 | orchestrator | 2025-08-29 15:20:54 - testbed-node-1 2025-08-29 15:20:54.690890 | orchestrator | 2025-08-29 15:20:54 - clean up keypairs 2025-08-29 15:20:54.708267 | orchestrator | 2025-08-29 15:20:54 - testbed 2025-08-29 15:20:54.730003 | orchestrator | 2025-08-29 15:20:54 - wait for servers to be gone 2025-08-29 15:21:05.616786 | orchestrator | 2025-08-29 15:21:05 - clean up ports 2025-08-29 15:21:05.853113 | orchestrator | 2025-08-29 15:21:05 - 1a42ce6d-00d7-4979-bf63-a6a515facda8 2025-08-29 15:21:06.133744 | orchestrator | 2025-08-29 15:21:06 - 3b451fa1-62d8-434a-bd65-3104752597f9 2025-08-29 15:21:06.459984 | orchestrator | 2025-08-29 15:21:06 - 3eaf0dd8-86e3-49f1-a820-72552b85b3a9 2025-08-29 15:21:06.787188 | orchestrator | 2025-08-29 15:21:06 - 407e378d-9056-4ea9-b909-9301da48cb26 2025-08-29 15:21:07.225623 | orchestrator | 2025-08-29 15:21:07 - b039e45b-8aeb-46cb-885c-4069cc65053f 2025-08-29 15:21:07.687359 | orchestrator | 2025-08-29 15:21:07 - c94b015d-cb6c-4acd-9d7e-ae9bdb95aacd 2025-08-29 15:21:08.102913 | orchestrator | 2025-08-29 15:21:08 - d5db6d49-72e1-4526-9fdb-c621f4e3aa33 2025-08-29 15:21:08.336757 | orchestrator | 2025-08-29 15:21:08 - clean up volumes 2025-08-29 15:21:08.467635 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-3-node-base 2025-08-29 15:21:08.506987 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-5-node-base 2025-08-29 15:21:08.552282 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-4-node-base 2025-08-29 15:21:08.591002 | orchestrator | 2025-08-29 15:21:08 - 
testbed-volume-0-node-base 2025-08-29 15:21:08.634298 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-2-node-base 2025-08-29 15:21:08.678143 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-1-node-base 2025-08-29 15:21:08.720395 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-manager-base 2025-08-29 15:21:08.765749 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-4-node-4 2025-08-29 15:21:08.806814 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-7-node-4 2025-08-29 15:21:08.848632 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-1-node-4 2025-08-29 15:21:08.892170 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-0-node-3 2025-08-29 15:21:08.932697 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-6-node-3 2025-08-29 15:21:08.973435 | orchestrator | 2025-08-29 15:21:08 - testbed-volume-5-node-5 2025-08-29 15:21:09.019328 | orchestrator | 2025-08-29 15:21:09 - testbed-volume-8-node-5 2025-08-29 15:21:09.068557 | orchestrator | 2025-08-29 15:21:09 - testbed-volume-3-node-3 2025-08-29 15:21:09.117750 | orchestrator | 2025-08-29 15:21:09 - testbed-volume-2-node-5 2025-08-29 15:21:09.164602 | orchestrator | 2025-08-29 15:21:09 - disconnect routers 2025-08-29 15:21:09.285595 | orchestrator | 2025-08-29 15:21:09 - testbed 2025-08-29 15:21:10.303655 | orchestrator | 2025-08-29 15:21:10 - clean up subnets 2025-08-29 15:21:10.354846 | orchestrator | 2025-08-29 15:21:10 - subnet-testbed-management 2025-08-29 15:21:10.517719 | orchestrator | 2025-08-29 15:21:10 - clean up networks 2025-08-29 15:21:11.214696 | orchestrator | 2025-08-29 15:21:11 - net-testbed-management 2025-08-29 15:21:11.493156 | orchestrator | 2025-08-29 15:21:11 - clean up security groups 2025-08-29 15:21:11.527907 | orchestrator | 2025-08-29 15:21:11 - testbed-node 2025-08-29 15:21:11.654277 | orchestrator | 2025-08-29 15:21:11 - testbed-management 2025-08-29 15:21:11.778486 | orchestrator | 2025-08-29 15:21:11 - clean up floating ips 2025-08-29 15:21:12.298123 | 
orchestrator | 2025-08-29 15:21:12 - 81.163.192.226 2025-08-29 15:21:12.705915 | orchestrator | 2025-08-29 15:21:12 - clean up routers 2025-08-29 15:21:12.803270 | orchestrator | 2025-08-29 15:21:12 - testbed 2025-08-29 15:21:14.308315 | orchestrator | ok: Runtime: 0:00:22.541157 2025-08-29 15:21:14.312970 | 2025-08-29 15:21:14.313148 | PLAY RECAP 2025-08-29 15:21:14.313283 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-08-29 15:21:14.313348 | 2025-08-29 15:21:14.474450 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-08-29 15:21:14.475602 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 15:21:15.256202 | 2025-08-29 15:21:15.256407 | PLAY [Cleanup play] 2025-08-29 15:21:15.274720 | 2025-08-29 15:21:15.274904 | TASK [Set cloud fact (Zuul deployment)] 2025-08-29 15:21:15.326617 | orchestrator | ok 2025-08-29 15:21:15.334047 | 2025-08-29 15:21:15.334229 | TASK [Set cloud fact (local deployment)] 2025-08-29 15:21:15.358867 | orchestrator | skipping: Conditional result was False 2025-08-29 15:21:15.367695 | 2025-08-29 15:21:15.367816 | TASK [Clean the cloud environment] 2025-08-29 15:21:16.538715 | orchestrator | 2025-08-29 15:21:16 - clean up servers 2025-08-29 15:21:17.023237 | orchestrator | 2025-08-29 15:21:17 - clean up keypairs 2025-08-29 15:21:17.043641 | orchestrator | 2025-08-29 15:21:17 - wait for servers to be gone 2025-08-29 15:21:17.091364 | orchestrator | 2025-08-29 15:21:17 - clean up ports 2025-08-29 15:21:17.168721 | orchestrator | 2025-08-29 15:21:17 - clean up volumes 2025-08-29 15:21:17.230127 | orchestrator | 2025-08-29 15:21:17 - disconnect routers 2025-08-29 15:21:17.261426 | orchestrator | 2025-08-29 15:21:17 - clean up subnets 2025-08-29 15:21:17.286970 | orchestrator | 2025-08-29 15:21:17 - clean up networks 2025-08-29 15:21:17.484963 | orchestrator | 2025-08-29 15:21:17 - clean up security groups 
2025-08-29 15:21:17.521924 | orchestrator | 2025-08-29 15:21:17 - clean up floating ips 2025-08-29 15:21:17.552211 | orchestrator | 2025-08-29 15:21:17 - clean up routers 2025-08-29 15:21:17.910925 | orchestrator | ok: Runtime: 0:00:01.415987 2025-08-29 15:21:17.913185 | 2025-08-29 15:21:17.913300 | PLAY RECAP 2025-08-29 15:21:17.913364 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-08-29 15:21:17.913394 | 2025-08-29 15:21:18.051084 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-08-29 15:21:18.052191 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 15:21:18.851344 | 2025-08-29 15:21:18.851564 | PLAY [Base post-fetch] 2025-08-29 15:21:18.867188 | 2025-08-29 15:21:18.867316 | TASK [fetch-output : Set log path for multiple nodes] 2025-08-29 15:21:18.913994 | orchestrator | skipping: Conditional result was False 2025-08-29 15:21:18.921788 | 2025-08-29 15:21:18.921970 | TASK [fetch-output : Set log path for single node] 2025-08-29 15:21:18.962611 | orchestrator | ok 2025-08-29 15:21:18.970785 | 2025-08-29 15:21:18.970937 | LOOP [fetch-output : Ensure local output dirs] 2025-08-29 15:21:19.482337 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/3a100108136040079abb46831c0215f4/work/logs" 2025-08-29 15:21:19.710037 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3a100108136040079abb46831c0215f4/work/artifacts" 2025-08-29 15:21:19.940964 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3a100108136040079abb46831c0215f4/work/docs" 2025-08-29 15:21:19.959493 | 2025-08-29 15:21:19.959601 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-08-29 15:21:20.809076 | orchestrator | changed: .d..t...... ./ 2025-08-29 15:21:20.809295 | orchestrator | changed: All items complete 2025-08-29 15:21:20.809330 | 2025-08-29 15:21:21.569504 | orchestrator | changed: .d..t...... 
./ 2025-08-29 15:21:22.263337 | orchestrator | changed: .d..t...... ./ 2025-08-29 15:21:22.285866 | 2025-08-29 15:21:22.285983 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-08-29 15:21:22.308179 | orchestrator | skipping: Conditional result was False 2025-08-29 15:21:22.311379 | orchestrator | skipping: Conditional result was False 2025-08-29 15:21:22.334217 | 2025-08-29 15:21:22.334322 | PLAY RECAP 2025-08-29 15:21:22.334400 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-08-29 15:21:22.334484 | 2025-08-29 15:21:22.428321 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-08-29 15:21:22.429983 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 15:21:23.102611 | 2025-08-29 15:21:23.102746 | PLAY [Base post] 2025-08-29 15:21:23.115867 | 2025-08-29 15:21:23.115982 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-08-29 15:21:24.261005 | orchestrator | changed 2025-08-29 15:21:24.272342 | 2025-08-29 15:21:24.272517 | PLAY RECAP 2025-08-29 15:21:24.272616 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-08-29 15:21:24.272717 | 2025-08-29 15:21:24.364719 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-08-29 15:21:24.367027 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-08-29 15:21:25.092754 | 2025-08-29 15:21:25.092890 | PLAY [Base post-logs] 2025-08-29 15:21:25.102277 | 2025-08-29 15:21:25.102391 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-08-29 15:21:25.512021 | localhost | changed 2025-08-29 15:21:25.527715 | 2025-08-29 15:21:25.527840 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-08-29 15:21:25.563303 | localhost | ok 2025-08-29 15:21:25.566819 | 2025-08-29 
15:21:25.566935 | TASK [Set zuul-log-path fact] 2025-08-29 15:21:25.581510 | localhost | ok 2025-08-29 15:21:25.589153 | 2025-08-29 15:21:25.589240 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-08-29 15:21:25.613485 | localhost | ok 2025-08-29 15:21:25.616642 | 2025-08-29 15:21:25.616737 | TASK [upload-logs : Create log directories] 2025-08-29 15:21:26.068790 | localhost | changed 2025-08-29 15:21:26.072557 | 2025-08-29 15:21:26.072679 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-08-29 15:21:26.515943 | localhost -> localhost | ok: Runtime: 0:00:00.006905 2025-08-29 15:21:26.519736 | 2025-08-29 15:21:26.519833 | TASK [upload-logs : Upload logs to log server] 2025-08-29 15:21:27.013621 | localhost | Output suppressed because no_log was given 2025-08-29 15:21:27.017601 | 2025-08-29 15:21:27.017794 | LOOP [upload-logs : Compress console log and json output] 2025-08-29 15:21:27.070033 | localhost | skipping: Conditional result was False 2025-08-29 15:21:27.076845 | localhost | skipping: Conditional result was False 2025-08-29 15:21:27.088511 | 2025-08-29 15:21:27.088708 | LOOP [upload-logs : Upload compressed console log and json output] 2025-08-29 15:21:27.137599 | localhost | skipping: Conditional result was False 2025-08-29 15:21:27.137831 | 2025-08-29 15:21:27.144243 | localhost | skipping: Conditional result was False 2025-08-29 15:21:27.152578 | 2025-08-29 15:21:27.152741 | LOOP [upload-logs : Upload console log and json output]
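
The deploy failure above is the `KeyError: 'recommended'` raised at `openstack_flavor_manager/main.py:97`, where the code extends `self.required_flavors` with a key (presumably `definitions["recommended"]`; the line is truncated in the traceback) that the dumped `definitions` dict does not contain — its only keys are `reference` and `mandatory`. A minimal sketch of the failure mode and a defensive fix, assuming that reading of the truncated line; the function name `collect_required_flavors` and the trimmed-down dict below are illustrative, not the actual openstack_flavor_manager code:

```python
# Sketch of the failure seen in the traceback above. The dict shape mirrors
# the `definitions` locals dump: only 'reference' and 'mandatory' are present,
# so a bare definitions["recommended"] raises KeyError when recommended=True.

def collect_required_flavors(definitions: dict, recommended: bool = True) -> list:
    """Return mandatory flavors, optionally extended by recommended ones.

    dict.get() with a default avoids the KeyError raised when the
    definitions source ships no 'recommended' section at all.
    """
    required = list(definitions["mandatory"])
    if recommended:
        # definitions["recommended"] would raise KeyError here when the key
        # is absent -- the failure recorded in this log.
        required += definitions.get("recommended", [])
    return required


# Trimmed to one mandatory flavor; the real dump lists ~30 SCS flavors.
definitions = {
    "reference": [{"field": "name", "mandatory_prefix": "SCS-"}],
    "mandatory": [{"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0}],
}

flavors = collect_required_flavors(definitions, recommended=True)
print([f["name"] for f in flavors])  # -> ['SCS-1L-1']
```

Whether the right fix is the `.get()` default above or shipping a `recommended` list in the flavor definitions consumed by the job depends on which side owns the contract; the traceback only shows that `recommended=True` was passed while the key was missing.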